Which image in the banner is a photograph, and which is computer generated? Can you even tell anymore? With the addition of computer-generated imagery (CGI) and artificial intelligence, commercial photographers now have more tools than ever for creating images. I sat down with award-winning food photographer Steve Hansen to discuss this topic and to delve into the question: "Is it enough to just be a photographer these days?"
You've undoubtedly come across a Steve Hansen image (whether you knew it was his or not). From grocery store shelves, to the almond milk in your fridge, to numerous award showcases, Hansen's work has surely crossed your path. I first came across it about five years ago when I took his class called Capturing Food in Motion. In this class, he brings us behind the scenes for the making of what he is most famously known for: his "splash" images.
If I had to describe Hansen's work, I would string together some catchy phrase that might include the terms "neurotically perfect," "explosively creative," and "a celebration of food and color." Perhaps a more succinct description is "a creative perfectionist's study of color and movement through food." His work has been an inspiration to me for many years, and it was truly a pleasure to sit down and chat with him. I will share a few highlights from our talk, and you can watch the full conversation in the video at the top, where I will give you the answers to your "what is what?" image quiz.
Hansen has garnered widespread recognition for his "splash work." He has developed a widely recognized style by creating deconstructed photographs of soups, yogurts, chocolate, burgers, and more, with the ingredients levitating, wrapping, and splashing in the most intricately beautiful ways.
I found myself sometimes commenting on social media: "Is this really a photograph?" His answer was always, "yes." It was hard to believe, but I knew from the class I had taken how he had created these magically orchestrated pieces.
Soon though, he ventured into CGI, and the guessing games became more entertaining and challenging.
I asked Hansen what his reasons were for venturing outside of the photography world into the daunting beast that is CGI. His answer was not what I expected.
"I got into CGI out of boredom. I was doing shoots in New York all the time, and I just had time on my hands in the evenings when I wasn't out. I had a project where I needed a package rendered, and I had no idea how to do that."
He reached out to a colleague in the field for the rendering. After contracting out that assignment he wondered: "Could I do this on my own? Could I use this as a tool to incorporate into photography?"
If you don't know what CGI is, it's essentially the creation of images that are modeled and rendered rather than shot with a camera. In the still world, it's an image built digitally from scratch.
Can you tell between these three images below which is the photograph, which is the computer-generated image, and which was made using AI?
Hansen expressed that the learning curve for CGI was steep.
It sometimes took me three to four hours a day studying the process top to bottom for five years at least. I use maybe 12 different software programs, each of which is its own Photoshop times ten.
CGI accounts for approximately 30% of his current work. One way Hansen has chosen to integrate the technology is by creating backgrounds for his “hero” shots. These cocktail shots are an example. These cocktails were photographed, but then placed in a computer-generated background he designed.
For Hansen, the intensive days of set builds are now often replaced with CGI environments he designs.
My big question for Steve was: with all these developments in technology, is it even enough to be just a commercial photographer anymore? Look at these three images below. One is a photograph, one is a computer-generated image, and one is AI. Can you tell which is which?
When I look at the work of Hansen’s CGI marvels, and Tim Tadder’s AI pieces (I interviewed him here), I find myself wondering: “Is it time to evolve or die in the world of commercial photography? Are we headed towards an expectation that creators should also be proficient digital creators?"
Hansen says no. Although things are changing quickly in our world as photographers, Hansen’s experience is that productions run similarly to how they did in the 80s: “A lot of these productions that happen are almost old-school. Technology has changed, ideas change, and the concepts have changed, but how you go about producing something hasn’t changed a lot.”
I think if CGI didn’t put photographers out of work, AI won’t either. I just don’t see it replacing anything.
As for AI, Hansen expressed that it helps him get out of creative ruts: “I use AI for creative inspiration. Throwing data at it and seeing what comes back at me kind of triggers some creative response in my head. I use it to sort of jostle my brain and think of something differently and then pursue it.” He describes the art of AI as “soul-sucking” when it’s used as a tool to create a piece from scratch. He has a segment on his website called AI Explorations.
I never call it my image outright. It’s still a very gray area as far as copyright. It’s in no-man’s land.
Closing Thoughts
An overabundance of articles has emerged discussing the potential impact of AI on the field of photography. My conversation with Hansen wasn't focused on that. Instead, our discussion aimed to delve into the question: "As commercial photographers, are we moving towards an expectation that we acquire proficiency in these emerging technologies to maintain our competitive edge?" When I scroll through his AI explorations gallery, it seems that the answer is yes. Yes, you have to keep up with all these tools. These tools are powerful, and they are the future. Essentially though, Hansen's answer was no. Acquiring these skills, for him, was a combination of curiosity and having extra time on his hands.
I use a variety of different tools to get to the end point that the client is looking for, that I am looking for. However I get there, I do not care. I don’t want to limit myself.
His experience in CGI and his recent inroads into AI appear as a natural progression for someone who transitioned from being a chef to a stylist, then to a photographer, and finally to a designer. His skill acquisitions seem to be born from being a perpetual learner more than a need to stay relevant. It was Einstein who said: “I have no special talent. I am only passionately curious.”
Perhaps Hansen's passionate curiosity is what has sustained his position as a leader and pioneer in the commercial food photography industry. You can watch the full conversation above.
Here are the answers to the photography/CGI/AI quizzes.
The best part is in the comments! Share your questions and thoughts below, and let us know if you guessed correctly.
All images used with permission.
Interesting commentary and the examples are hard to tell apart especially at that size.
With years of studio experience, I thought that it was possible to tell real from fake by looking for telltale signs of bad lighting ratios and blown highlights at capture. Most photographers are usually at least slightly wrong, and AI creates lighting out of thin air with seemingly perfect ratios. But then I noticed so much of AI is copied from photography with bad lighting that AI can have bad lighting too. So now, I don't think I can genuinely tell the difference between an originally digitally captured image and AI/CGI.
However, I was looking recently at another photographer's 4x5 drum scan files from a Heidelberg Tango and genuinely thought they looked like real photographs. There is an overall lack of noise, and the difference between dots of grain and square pixels seemed to be easily recognizable to my eyes. But that's just me today, and tomorrow I might see an AI image created to look like drum-scanned film and be fooled. For now, though, I'm thinking nothing generated seems to have the look of superior drum-scanned film, especially when blessed with the luxury of being able to pixel peep and zoom in to large file sizes.
I agree with him that a lot of the advertising industry is still in the corporatized 1980s, but I disagree with him that the slowness to react will keep jobs. AI is the beginning of web3 and decentralized finance so advertising is going to be monetized differently than it has been in the big corporate dominated web2 culture. Honestly, I think dedicated photographers should start thinking about shifting to the art market for money (NFTs) and leave advertising to AI.
Thank you for this insightful reply. It used to be very obvious, but month by month I can see big improvements in the technologies. The AI-generated images with people in them seem to always look somewhat plastic to me, though. But some of Hansen's various pieces were hard for me to distinguish.
I agree with you on the litmus test: pixel peeping.
Yes, smooth surfaces like skin often seem to look like plastic, and that has been a complaint from film photographers about digital photography from the beginning. I still have never seen a digital camera capture a clear blue sky without making it look like the bottom of a kid's pool. AI seems to have the same problem with rendering smooth, plasticky surfaces, so I've just assumed that's a normal look for all digital images.
However, I am seeing something else in AI-generated images that I have never seen in digital photography. In fact, it's so new to me that I don't even know how to talk about it properly, so please forgive me for being loose and imprecise. In the early days of digital, I had a Kodak DCS 520 camera, and it sometimes made these weird patterns in textures that I had never seen before. Back then, there were no photographers who could tell me what it was. Now, every photographer would recognize it as "moire." It's possible that computer-generated photographs have weird patterns in texture too that someday everybody will know about. But right now, all I can say is that the pattern reminds me of diffraction.
Photographers know that their lenses work best at certain apertures, and when they stop them down too far, diffraction occurs. Diffraction looks like a loss of acutance that's most noticeable in complex patterns and surface edges. I believe AI images have a very similar look in terms of a loss of acutance that's really noticeable in generated landscapes.
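To put a rough number on that diffraction effect, here's a back-of-the-envelope sketch using the standard Airy disk rule of thumb (d ≈ 2.44 · λ · N); the wavelength and aperture values are just illustrative, not anything from the comment itself:

```python
def airy_disk_um(wavelength_nm, f_number):
    """Approximate Airy disk diameter (to the first minimum) in micrometers.

    Uses the standard rule of thumb d ~= 2.44 * wavelength * f-number,
    which is why stopping down too far visibly softens the image.
    """
    return 2.44 * wavelength_nm * 1e-3 * f_number  # nm -> um conversion

# Green light (~550 nm): the blur disk grows linearly as you stop down.
sharp = airy_disk_um(550, 8)    # roughly 10.7 um at f/8
soft = airy_disk_um(550, 22)    # roughly 29.5 um at f/22
```

At f/22 the disk spans several typical sensor pixels (often ~4 µm each), which is the loss of acutance the comment describes.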
I know that fractals are often used to create random patterns in generated imagery, and I don't understand all the math involved. But if I understand correctly, random textures appear to be made with noise using methods like this: https://en.wikipedia.org/wiki/Perlin_noise
Noise in graphics: https://www.scratchapixel.com/lessons/procedural-generation-virtual-worl...
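For readers who haven't seen it, the lattice-noise idea behind those links can be sketched in a few lines of Python. This is simplified value noise rather than true Perlin noise (which interpolates gradients, not values), but the fractal layering of octaves is the same trick procedural textures use:

```python
import random

def value_noise_1d(x, lattice):
    """Smoothly interpolate between random values at integer lattice points."""
    i = int(x) % len(lattice)
    j = (i + 1) % len(lattice)
    t = x - int(x)
    # Smoothstep easing hides the creases that plain linear blending leaves.
    t = t * t * (3 - 2 * t)
    return lattice[i] * (1 - t) + lattice[j] * t

random.seed(42)  # fixed seed so the texture is reproducible
lattice = [random.random() for _ in range(16)]

def fractal_noise(x, octaves=4):
    """Sum octaves at rising frequency and falling amplitude ("fractal" noise)."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise_1d(x * freq, lattice)
        amp *= 0.5
        freq *= 2.0
    return total
```

Sampling `fractal_noise` densely along x produces the kind of band-limited, slightly soft texture the comment associates with generated surfaces.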
When I look closely and pixel peep an AI image, I think that it may be made up of a type of noise that results in a loss of sharpness that looks similar to a loss of acutance in lens diffraction. Then, when a smooth surface starts to look sharp in AI, it also appears "over-smoothened" like plastic because there's less noise than in textures.
Check out these Midjourney v5 landscapes of Zion National Park; they look like they're just filled with the look of diffraction from the noise. It's especially noticeable in the image of the leaf on the rock, because the leaf looks sharp and plasticky while the rocks look soft and noisy.
Sorry for the long post, but to sum up, the point I'm trying to make is that AI-generated imagery appears sharp/plasticky where the surface is smooth while simultaneously looking slightly soft/noisy where the surface is textured.