Adobe Max is a three-day conference in Los Angeles, CA attended by designers, illustrators, photographers, and social media content creators. Attendees can watch presentations by Adobe representatives and attend educational sessions given by independent creators. This year, generative AI was mentioned in many presentations. The feature is implemented in unique ways across the programs that make up Adobe’s Creative Cloud, and users often find they can communicate their intent more effectively to programs with AI enhancement.
I spoke with Deepa Subramaniam, Vice President of Product Marketing, Creative Cloud at Adobe, to discuss Adobe’s intention with widespread AI implementation (often referred to simply as Firefly) and how photographers might use this technology to materialize their creative vision. AI can be used to remove unwanted elements or create desired details and scenes not present in the original image capture. The feature has been available for less than a year, meaning there is much to explore about how it might best be utilized in a creative workflow.

What was the intent in introducing AI-based additions like Generative Fill across the Creative Cloud?
The first thing we did was put forward some internal tenets and principles to guide how we explore generative AI and build with it. We were very creator-first from the beginning, being thoughtful about how we want creators to play with this and how we can support them in learning about this new technology. From day one we were transparent and open, and you can see that in how we brought this innovation to market through open public betas built on dialogue with the community.
Through the open beta process, we have learned about new workflows that we weren't even aware of. It's been amazing to see how creators are using Generative Fill in Photoshop not only for the generated output in production workflows but for ideation as well. It wasn’t clear to us that that was a possibility until this information came to us through the public beta.
As photographers, we often meet new technology with a mix of excitement and apprehension. I always wish that I could utilize the latest hardware and software technology before my clients, and even my competition, are able to.
Change brings innovation and that innovative change powers new creation, new output, and new ideas. It kickstarts the next age of that medium. We are in one of those moments right now with generative AI and photography. Some creators are really ready to dive in and start playing and we want to put that innovation in their hands. Some are more reserved, thinking, okay, I think I need to be exploring this. They understand this is not going away, but are still sort of at the start of their journey of understanding how to fold it into their workflows. And we want to foster that dialogue with those users and help them in their exploratory journey.
I think for many people, AI just seemed to come out of nowhere in the same manner that cryptocurrency and NFTs were unknown one day and seemingly in the public consciousness the next. Photographers were rightly concerned about what AI means for their artistry or their commercial business.
Our philosophy around this has always been to be very creator-centric. Creators should ask whatever questions come to their minds. The technology is there for them to understand and personalize. And it's been a beautiful thing to witness. The public betas have been the forum by which we have a dialogue with our users. We are engaging with the community through these betas to have that conversation. It's not even just about shipping the betas and seeing what happens; it's dialoguing on social media, tons of in-person events, and working closely with creators to understand how they're folding it into their workflows. Hearing their questions, answering them, going back, and having discussions internally. It's been a two-way dialogue, which I think is critical to how this technology is going to innovate and continue to be useful to creators.

And honestly, I think that's a real reason why we're getting the momentum and the adoption that we're getting. We announced just yesterday that 3 billion images have been created with Firefly, a billion of that in the last month alone. People are leaning in to explore this technology because there's real usefulness there, and there's a myriad of ways to use Firefly’s capabilities to create. And so each creator is on their own individual journey to figure out how to pull that into their workflow. And we are here to support that, to learn from that, and to build on top of that.
It’s worth noting also that sometimes photographers might be using AI and not be aware of it because it's so baked into the application. Lightroom itself has a long history of embracing AI, but not yet on the generative Firefly-powered front. There's broad artificial intelligence, which Lightroom utilizes, and then there's generative AI, which is a subset of AI built around the creation of new pixels. We've always explored ways to use AI to speed up workflows and make the act of creation easier. Lightroom has a bunch of new capabilities that are AI-powered. Denoise is one. In the release that we announced yesterday, we have AI-powered lens blur that can add a blur effect to an image regardless of the hardware used to create the image. Our philosophy is: let's help you, the photographer, the creator, work better, faster, smarter.
I think the problem for some photographers is the feeling that AI technology will replace them.
In the keynote yesterday, I thought it was interesting to see how digital artist Anna McNaught had an idea for an image of a wolf in a forest. She created a rudimentary sketch similar to a drawing that a child might make. Then, she used a series of AI-enhanced features in Photoshop to create an image that fleshed out her original idea.
Imagine you took a photograph that didn’t come out exactly as you pictured it. The power of Firefly is that when an image isn't exactly what you wanted, you can perform a generative edit that transforms it into exactly what's in your mind's eye. Creators now have the possibility of doing that, whether it's in Express or in Photoshop. Firefly is a part of the creative process, not a full replacement of the creative process, because that's just not possible. The human has to be driving that creative process. Firefly gives you creative control, and the precision to bring to life exactly what is in your mind's eye.
Anna is a prolific creator who has used Photoshop for 20-plus years. She is a Photoshop expert, but she is also on this journey of understanding what Firefly can enable and folding it into her creative process. One thing she highlighted yesterday was that she used Firefly to help ideate at the start of the whole process of deciding what she was going to create. She was sketching ideas as simple drawings, and then she used Firefly to turn those sketches into something a little more visual so she could have feedback and conversation with her team. She used Firefly image generation to bootstrap the ideation process. And then when she moved beyond ideation to the actual content creation, she used the Firefly-powered capabilities in Photoshop, specifically Generative Fill and Generative Expand, coupled with everything else in Photoshop, compositing, selection, layers, blurring, masking, to create a final output that was an amalgamation of many photos, some generated, some real-life photographs. The final result was a beautiful new creation.
It is interesting that she didn’t fully understand the technology at the point that she started using it.
Right. She's exploring and seeing how Firefly can assist her creative process, and we're seeing that time and time again in the feedback we're getting through these public betas, where people are learning new ways of using this technology to their benefit.