The AI 'Photography' Race Is Getting Hilarious: Enjoy The Show

AI is the perfect hype commodity for tech companies and social media shills. If you thought NFTs and crypto cults were full of hot wind, then strap yourself in for the AI movement, because it’s bigger, gassier, and truly inescapable.

Luckily, the hype around AI “photography” is at least good for a laugh, so we may as well enjoy the show.

AI Will Destroy Everything You Love, But Not How You Think

Earlier this week, hundreds of AI industry leaders warned that artificial intelligence could wipe out the human race. A joint letter insisted that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Now, I’m not disagreeing with this. But it always makes me chuckle when the industry talks about AI as if the algorithms are the threat. You know, instead of warning society about its increasing dependence on hackable technology or irresponsible developers writing code that basically says “kill all humans”.

Terminators won't destroy the human race, but hacked or poorly written algorithms could, say, starve us if we're dumb enough to make food production reliant on them.

Either way, AI will destroy everything you love long before anything like this happens.

Again, it won't be the algorithms that do the damage, though. Instead, it'll be the AI tech bros forcing the AI narrative into every aspect of our lives, incessantly overpromising and underdelivering while the technology itself progresses at a relatively slow rate. We're already seeing this in photography and, basically, every creative industry.

Each week, a new AI revolution is announced, and we’re told the future is finally here. Then, the latest AI toy falls apart under the first round of genuine scrutiny, like the recently botched rollouts of Google, Bing and every other company clamouring for relevance in the age of AI.

It’s the hype that matters, though, not the results. In fact, the majority of gains in the stock market this year are attributed to AI enthusiasm. As we've seen with cryptocurrencies, NFTs and even content monetisation in the digital age, it's the enthusiasm that's profitable.

Welcome to the hype economy.

Adobe Enters the AI 'Photography' Race

Adobe has officially entered the generative AI race with its beta feature, Generative Fill. It's the newest shiny toy in the AI hype machine, and the usual suspects are out in force with typical overenthusiasm. Search “rip photographers” on Twitter, and you’ll get an endless stream of tweets that all use the same hyperbolic phrasing.

I mean, half the tweets are identically worded, and they’re all sharing the same examples. But who cares about originality, anyway?

Here’s a word-for-word template being shared by countless accounts:

RIP Photographers

RIP Designers

RIP Retouchers

Even Midjourney is in trouble now…

The hype is real. 🤯

All of the best threads on Adobe Generative Fill on Twitter. 👏

Be sure bookmark & share!

The hype certainly is real.

If you're familiar with the NFT hustle of recent years, you’ll recognise most of this language. In fact, a quick scroll through the comments on many of these threads reveals a mix of overexcited journalists and active or former crypto bros (plus a healthy amount of ridicule from creatives).

I'm not embedding any of these tweets for two reasons. First, I don't want to promote them. Second, they all use artwork without permission, something else I don't want to contribute towards.

They are good for a laugh, though. So, I’ll share the AI-generated content they're (not) producing – that nobody can own any rights to. Maybe you can guess which pieces of original artwork have been used – without permission – to generate these expansions.

Expanding Images With Adobe’s Generative Fill

Aside from having a bit of fun, it’s always worth taking an honest look at features like Adobe’s Generative Fill tool. As hilarious as the AI tech bros are, we all need to keep tabs on the capabilities and limitations of artificial intelligence. It’s not only a question of whether they’re a threat to creatives but also how useful they may be in helping us do our jobs.

For the most part, these accounts are simply taking existing pieces of artwork and using Photoshop to expand them. The first example I saw was someone expanding the Mona Lisa. Essentially, we're just getting a lot more of the same.

Can You Guess the Iconic Album Cover?

When you run out of famous paintings to butcher, I guess album covers are the obvious next step. So, let’s see what our pioneering artists have come up with using Adobe’s Generative Fill tool.

Can you guess which iconic album cover was used to generate this expansion?

This is a particularly interesting example. Aside from being one of the most famous album covers of all time, it’s also one of the most controversial regarding usage rights. The original photo used for this cover featured a naked baby in a pool of water, appearing to swim towards a $1 bill on a fishing line.

At the age of 31, Spencer Elden – aka "The Nirvana Baby" – filed a lawsuit against the use of the image on the basis that he was unable to give consent.

In terms of how well Adobe handles this, the original photo is about as easy as it gets for generative AI. It's a low-resolution image taken underwater and, more importantly, it includes no lines of high-definition detail.
Anyone experienced with Photoshop's intelligent fill tools will understand why the original image is a perfect choice for Generative Fill. Sadly, this seems like a classic case of beginner's luck.

Next up, we could be talking about the most iconic album cover of all time and a more complex image for Adobe’s Generative Fill tool to expand.

Hopefully, the zebra crossing is the giveaway clue for this one. All you have to do is imagine the UK’s four most famous musical exports crossing a particular road.

Clearly, there are a lot of issues with Photoshop’s output here. It almost looks like the algorithm has merged images from Google Maps’ street view, creating all kinds of bizarre distortion.

Whoever created this prompt doesn't seem to mind that the car on the left completely clashes with an image taken in 1969. Who cares about details when you can add a blimp in the sky and draw all attention away from the four original subjects of the image, though?

All of this aside, the real victim here is the poor dog to the right of the expansion.

Thoughts and prayers for our one-legged (?) friend.

Can You Guess the Famous Meme?

Moving further away from iconic paintings and album covers, memes are also getting the Adobe Generative Fill treatment.

This was created using a meme commonly referred to as “distracted boyfriend.” It depicts an apparent couple with the boyfriend gawking at a passing lady in a red dress, much to the dismay of his unimpressed girlfriend.

The girlfriend’s gaze is firmly locked on the back of her boyfriend’s head, but it appears she has bigger problems to worry about.

I’m no doctor, but that looks like a medical emergency to me.

You only have to look around the frame of the original image to see a whole bunch of issues with this expansion.

You can also see where Generative Fill struggles when it tries to merge multiple sources. The tool is clearly trying to combine several images of buildings to match the edge of the frame in the original photo. Again, nobody experienced with Adobe’s Content-Aware tools will be surprised by the issues with lines and details.

These examples are supposed to demonstrate the capabilities of Generative Fill and AI tools in general. However, all they really do is reveal the lack of knowledge and attention to detail of anyone praising the results. By extension, they show how important it is that AI tools are used by experts who actually know how to use them.

In this case, photographers, photo editors, and digital artists.

How Good Is Adobe’s Generative Fill Tool?

Adobe’s Generative Fill feature is still in beta, which means it could improve somewhat before any official release. Don’t expect miracles, though, because beta releases are pretty deep into the development cycle for software products. In other words, Adobe must either be fairly happy with the results or in a real hurry to get its name in the generative AI discussion as quickly as possible.

You can try Generative Fill out for yourself by downloading the latest beta version of Photoshop from the Adobe website. You can also sign up for a free trial to test Generative Fill, even if you’re not an existing Photoshop customer.

Funnily enough, Adobe isn’t promoting the feature as a tool for expanding paintings or album covers in its marketing material. In fact, the first demonstration in the video below is a reasonable use case for the tool. The video starts with a creative adding yellow road lines to an image of a cyclist riding on a remote road.

The creative then uses the tool to add more sky to the top of the image, converting the 1:1 image into what looks like a 2:1 composite.

To be honest, the sky doesn’t match all that well to my eyes, but maybe that’s just the bias of knowing it’s AI-generated.

Unfortunately, the video descends into madness from here, placing stags in cartoonish streets and turning a legitimate landscape image into a composite mess. I can’t be the only one getting heavy macOS Sierra flashbacks from these AI-generated mountains.

Adobe is telling us to “dream bigger” in this promo video, but the botched reflections, unrealistic lighting, and clip art signs are the stuff of nightmares. Based on this video, it also seems like Adobe’s Generative Fill feature isn’t as generative as Adobe would like us to believe. When you ask it to add a reflection, you can tell it tries to use the data in your existing image.

However, when you add completely new elements or change entire backgrounds, you often end up with recognizable mountain ranges or streets. Compared to tools like Midjourney, it looks like Adobe’s algorithm is using less data (fewer images) to generate content.

The good news for Adobe is that increasing data volume should, in theory, improve its output. In fact, this is the only way companies like OpenAI and Adobe can realistically improve the quality of their AI products with the technology currently available.

A more significant jump in AI capabilities will require a new technological breakthrough of some kind.

How Useful Is Generative Fill for Photographers?

Adobe’s Generative Fill tool will improve with time, but I can already see some legitimate use cases for photographers and other creatives. Obviously, digital artists who don’t need realism in their work have the advantage here.

The use cases for photographers will always be more limited, though. As the technology improves, it will only get easier to remove unwanted elements from an image. You can already imagine publishers asking photographers to switch out the sky in an image rather than wait an unknown period of time for better weather conditions.

Personally, I have no interest in using generative AI for photography, but I still test every tool I can get my hands on. Quite simply, I want to know what they’re capable of and what they’re not.

To test Adobe’s Generative Fill, I went through a bunch of rejected photos and selected this raw file of an image taken in London last year.

I selected this image because it seems like a good candidate for using Generative Fill to expand the left side of the frame. Most of the image is shadow and light with almost no detail, except for the pattern on the window, which will help demonstrate the tool's capabilities with details.

This is the most convincing version Generative Fill produced:

At a glance, it’s done a decent enough job until you notice the smeared patterns on the generated parts of the window. In all honesty, casual viewers would probably never notice this.

So, if I desperately wanted to expand this kind of image to 4:5 and fill the left side of the frame, maybe Generative Fill is a viable option. Even still, I think I would reject such an image on the basis that I should have composed it better in the field.

Also, keep in mind I specifically chose this image because I knew it would be relatively easy for Generative Fill to work with. Aside from the pattern on the window, there is no detail required in the expansion at all.

Once you start replacing backgrounds or anything major, results quickly get messy.

For example, here’s the original version of an image I took during a rare daytime shoot in London:

So, what happens when I ask Adobe to swap out the background for a street in Paris on a sunny day?

Well, after three failed attempts to cleanly select the subject with Photoshop’s AI tools, I had to do it manually with the good, old-fashioned quick select tool. Then, I inverted the selection, hit the Generative Fill button and typed the prompt: “A street in Paris on a sunny day.”

This is the most convincing of the three generations Adobe’s tool came up with:

You can see how it’s trying to recreate the perspective of the original image, but the result is a complete mess. You can see in the bottom-left of the frame how much the algorithm has struggled with detail, and this crop from another alternative is even worse.

Let me be clear, though, I would never expect good results from a tool like Generative Fill for this kind of application. The first test, where I slightly expanded an image with no detail, is the kind of task that’s suitable for current AI tools. Switching out backgrounds and expecting quality or realistic results is going to end in disappointment.

All in all, Adobe’s Generative Fill tool is one of the least impressive AI tools I’ve tested from major providers. It will get better with time, but this rollout feels rushed. I get the sense Adobe wanted to release an AI tool as quickly as possible to put its name in the mix.

And, honestly, the quality of its output doesn’t really matter because it’s the hype that bolsters stock prices, not the technology itself.

The Hype Will Die Down, Eventually

Soon enough, the market will be saturated with AI tools, and the hype will start to die down.

The mist surrounding AI technology will gradually clear as people’s understanding of it increases – not so much the technical aspects, but the experience of using it. AI will change the way we work and live our lives, but the current technology isn't putting us on the verge of revolution.

Tech companies are exploiting the public's limited understanding of AI technology and the media's predictable sensationalism for quick profits. The dishonesty will become less profitable as people's experience and understanding of the technology increases and the narrative in the media has to change.

The hype will die down, but it’ll take longer than the NFT craze that swept photography in recent years. AI isn't a niche movement; it's already entrenched in every aspect of our lives – at home, at work, and almost everywhere we go.

So, we're going to have to put up with a lot more hot wind from the AI tech bros every time a new product or feature hits the market. All we can do is sit back, have a good laugh, and enjoy the show.  


Thanks, Aaron, for this excellent contribution. It coincides with my own experiences. In fact, it's just hype at the moment. In the beginning, the fun factor predominates, but over time you realise how much time you've wasted to achieve a result you're still dissatisfied with. I recognise immediately whether an image is generated by AI, no matter which engine.

I've been experimenting with Adobe's AI for a while. The results are sometimes horrifying. It takes a lot of time to fix the bad rendering. In the last three months, I've only had one picture that impressed me. It was my own landscape image that I expanded by 100%. The rest is very sobering.

The thing that bothers me most is the massive copyright violations. Adobe boasts that it only trains on its own stock images. But the impertinence is that uploading images to Adobe Stock means you automatically accept that Adobe can use the footage to train its AI. I deleted all my stock images. I can do without the meagre income in the cent range.

I'm not going to waste any more time on AI right now. It still has to improve enormously before it can be used sensibly in the daily workflow. It reminds me a lot of the time when HDR was pushed and hyped. Kitschy, colourful pictures that gave you eye cancer. There is nothing left of it today. At the moment, my concern that one has to jump on the AI train is unfounded. The future will tell. I would recommend... cool down. (sorry, translated by Google ;-) )

Thanks for reading and commenting, Klaus. Google's AI translation has done a pretty good job, btw ;-) Interesting to hear your experiences and the point about Adobe stock images is an important one. A lot of people already sign up for online services without understanding the T&Cs and AI (without regulation) is already opening up new minefields with more to come.

Excellent description of the situation IMO Aaron.
My first thought about this "Adobe tool" was: There are even more fake pictures being produced by people who otherwise wouldn't be able to produce good photos.
It should be seen for what it is. A tool intended to facilitate and/or speed up photo post-processing.

Hi Klaus, thanks for commenting. This has been happening for many years now but ChatGPT put AI tools in the mainstream media like nothing before. Google has been telling us for years that its AI translation technology can match professional human translators, citing experiment results that are highly favourable to its algorithms – a lot of hype with underwhelming results. Instead of replacing human translators, professionals have been using AI tools to complete their jobs faster – exactly as you say.

That is a fantastic article, Aaron! I thoroughly enjoyed it. I think you're on the money here. It will take some time before this is truly useful in photography, but perhaps we're just using it for the incorrect applications because of the hype and marketing.

Hi Géran – thanks for reading and commenting. A little tongue-in-cheek but glad you enjoyed the article. Fully agreed. The technology will improve and become increasingly helpful to photographers and other creatives. In the meantime, we'll have to put up with social media personalities using it to butcher famous artworks for clout – what a time to be alive ;-)

Criticizing AI because of its imperfections is probably foolish. The tools will advance faster than we imagine. I remember the same being said about speech recognition and automatic translation. In the beginning, both were good to make fun of. The old bashing seems to stick, and some people stopped using them forever. We need to take AI seriously in order to handle it. Just thinking we are the greatest is no help. I don't know who said it, but mankind indeed deserves a lesson in humility.

Who called themselves the greatest here? I've been working with it extensively for the last three months, and it's not usable at this point. And speech recognition, cough, still gets on my last nerve – Alexa, Siri, and whatever they're called. Do you have an example of your own (pictures? I can't find anything in your profile) that would let me share this boundless, naive optimism? Aaron has dealt with the topic seriously and very extensively. You are welcome to write such a well-researched pro-AI article – maybe you'll convince me. 😉

I use speech recognition to dictate text messages instead of typing all the time when there is nobody around – very reliable and quick. And you've probably read automatically translated articles without even noticing. I didn't understand what you wanted to do with my picture, but it didn't sound nice. Moreover, you could find me easily on the net if you cared to search. From what I read, you might be a perfect addressee for the last sentence in my comment above.

Totally agree. We're at the point where a lot of AI imaging has serious weaknesses, but it's still frequently appearing in the media and increasingly displacing professional photographers.

Nobody was really considering AI imaging at the beginning of the year... This has popped up in the last six months. Extrapolate the growth out another six months and what does it look like?

Apple, Meta and Google have access to everyone's personal photos and videos and unlimited budgets... What happens when they finally release their products? I think the answer is a shift in imaging not seen since the development of film cameras.

What will it look like? An investment bubble. Shirts will be lost. Because really, what is there here to sell?

Hi Rene, thanks for commenting. To be clear, I'm not criticising AI technology. This article is aimed at the narrative surrounding AI and the pattern on social media of hyping up every technology innovation. We now live in a marketing world that constantly promises the next big thing but never quite delivers – whether it's economic freedom (crypto), an automated utopia (AI), living on Mars (SpaceX) or whatever else. The breakthroughs never quite arrive but tech companies generate billions from the hype, news publications get the clicks they so desperately need, and social media influencers get the engagement they crave. When reality hits that progress in AI (as with most technologies) is *significantly* slower than the marketing narrative, we'll be talking about the next big thing.

I get the problem with the hype about everything. Mostly, it just directs efforts in the wrong direction. Mars exploration is a typical example. We should first explore Earth and make it habitable again. But the AI thing will be taking off. We need to take care that it doesn't take over.

Very good article, but as is the case at every technological crossroads in history, these moments of concern will inevitably succumb to the relentless march of technology. Such crossroads are nothing new, although they may seem that way to those experiencing them at the time. The importance of Adobe's AI efforts at the moment is not their initial capabilities and/or limitations. What matters most is the possible future they reveal ten or twenty years down the road. These technologies have a way of cross-pollinating, so disciplines besides photography (graphic artists, microchip developers, publishers, the advertising industry, etc.) will most likely have an impact on the photography world beyond what we can imagine now. In fact, beyond the software, it is the chipmakers, together with advanced computer hardware companies, that will be on board this new AI boat, and the photographic world will be right there riding with them, full steam ahead. This will be a worldwide, developed-world competition. Photo capabilities will never be the same, but nothing else will be either. That future is coming at us all, and fast, and it's as exciting as it is scary. Can't wait to see it.

Hi Eric, thanks for commenting. Yes, it's true that the technology will improve but this is nothing new. This technology has existed for many years, gradually improving. It's only now that ChatGPT has created headlines around the world that companies like Adobe are rushing to release AI tools as quickly as they can. ChatGPT isn't new technology, either. It's a new(ish) format that implements existing natural language processing (NLP) into a chatbot interface. Progress in AI is slow and the biggest advances in the past decade have been in computational processing power, helping algorithms to crunch more data in shorter spaces of time. If we're at a crossroads (and it seems we may be) it will be the adoption of the technology, not a sudden increase in the capabilities of AI – at least, until the next big breakthrough in machine learning.

The big thing about "AI" is that it doesn't exist. There's no artificial intelligence in today's world and nothing even remotely close. These programs combine and process other people's images or text and learn nothing from it - it's just input, process, output, forget... then wait for next request.

"Training the AI" is just tech-bro BS for "copying people's stuff" in some new and different ways that they think will evade existing IP law.

Hi Jim, this is very true. The term AI has been used in a different context over the past decade to the point it no longer means "true AI".

Calling it AI is just feeding the hype. There's no magic in this software.

The thing to note is that no one in academia is talking about any sort of breakthrough in actual AI. No bombshell papers published. No buzz in the research community. Just bigger neural nets, more computing resources, new ways to process text and images.

Note how much of this AI hype is really just within the photographic world.

I think it will only get better if people start paying for cloud GPUs. Get ready for more subscriptions or put up with restrictions and poor results.

Everyone talks about AI - and everyone understands it differently, everyone has different expectations of so-called AI. But what actually is AI? Let's just start with the word, with the term "intelligence". And here we encounter the one big hurdle: Intelligence is not defined! Quote Wikipedia: The definition of intelligence is controversial, varying in what its abilities are and whether or not it is quantifiable. So anyone can use this term in their own sense, e.g. to describe a certain performance. What is AI then? I don't know. I can't really do anything with this term. Companies use this circumstance to sell AI, to sell a service. This service is something that humans could do as well - only with much more effort. And we generally call such helpers simply: tools. But tools sound so banal, so arbitrary. AI, on the other hand, conveys something special, something extraordinary.
So far, and I stress at this point so far, the AI tools I have used have not performed in a way that I can relate to my idea of intelligence. Maybe I just have too high expectations and demands on such tools? Of course, Midjourney can generate wonderful, and on demand, flawless portraits based on millions of faces. But as with the millions of mobile phone photos, these become arbitrary over time and thus normality to a certain extent. Instead, authentic portraits of unique, real-life faces become interesting again when the perfect look has faded. Why are there actually still product photographers? Actually, CGI should have wiped out this species years ago ...

The hype passed up reality years ago and just kept going. Basically, any software that does something new that used to seem really difficult is now called "AI". The term means absolutely nothing today.

Thank you for the thoughtful article. As an IT person by day, what I would most like to see are models that train on and utilize as an only source my own vast catalog of images that I have taken. This to me would solve several issues that we have with copyright and intellectual property. For example, any new elements added by the generative algorithm would irrefutably be my own source input, even if transformed by the algorithm. Any output could irrefutably be attributed to my own creativity, plus the processing done by the generative AI, just as any image that I edit in Lightroom or Photoshop was up until the time that generative fill appeared in the feature set.

Also, these flaws that you call out will undoubtedly be improved over time but it is important to identify them so that these vendors can correct them.

Generative fill looks like...well, the content-aware fill function of PS.
I understand that Adobe is going to use its stock archives to help with their generative fill. Hope they pay their contributors.

LOL those contributors will get absolutely nothing. Adobe will claim this doesn't even constitute "use" of their images. Hey all they did was let their software look at them... once... and it "learned" rather than copied. And BTW here's a bridge for sale...