Before the recent advent of AI-based retouching, I struggled to make rather basic adjustments to my portraits. Using AI plugins, however, I can quickly whiten teeth and eyes, smooth wrinkles in fabric, and remove blemishes from a face. Portrait retouching has benefited greatly from AI. One area that has been neglected in this technological revolution, however, is compositing. To put it succinctly, compositing is broken. Fortunately, Adobe is aiming to fix it.
My needs concerning compositing are simple. In my headshot photography, I don’t light the background. In my home studio, this works just fine. I use the Westcott Backdrop Pro with white fabric for the background. Because the fabric receives minimal light, it turns dark gray. When I am shooting on location, I don’t always have control over the ambient lighting in the shooting space. Sometimes the background is too bright. Other times, it has a color cast.
In these situations, I use Photoshop to select the background and make color or exposure adjustments. A problem arises when I try to make the background substantially brighter than it was originally. Similarly, if the background is an ugly shade of green because the shoot took place in a corporate office, I can’t always shift it to a neutral gray. In both of these instances, there is an edge on the subject that looks unnatural when paired with the modified background. It would be ideal if I could simply select the subject from the image and place them into any gray background of my choosing. When I attempt this, it is obvious that the image has been composited.
This situation might be understandable had we not become so quickly accustomed to the marvel of AI-based retouching. Using Adobe Photoshop’s Generative AI, I can remove reflections from eyeglasses or replace a poorly knotted tie with ease. In light of this progress, it’s hard to understand why I can’t simply shift the color of the background on a headshot.
I spoke with Mengwei Ren, an applied research scientist at Adobe, about the progress Adobe is making in compositing technology. Mengwei provided me with a private preview of this technology, an excerpt of which can be viewed in the video accompanying this article. Mengwei is part of the team directly responsible for these improvements announced today as part of an Adobe Sneaks presentation. Although Sneaks announcements are presented and demoed for thousands of attendees at Adobe’s MAX conference each year, Adobe states that Sneaks innovations are works in progress. Anything announced under this umbrella is a vision for the future.
The new compositing technology has the working title Project Blend, and its capabilities extend far beyond my needs; it will be of particular interest to photographers creating composite imagery for advertising assignments. There are two main problems a creator faces when making a composite. The first is the aforementioned edge issue. If light spills onto the subject from a greenscreen background, it becomes very apparent when that background is changed to a different color or a different scene. A skilled retoucher can shift the color of the green edge on the subject and blend it to match the background colors. This is beyond my skill level, and judging from the volume of poorly composited images I see regularly, beyond the capabilities of many others as well.
The second problem is more complicated. Imagine a scenario where you had taken a family portrait on a beach in sunset light, but a family member was missing. The family then supplied you with a photograph of that person and asked you to add them to the family portrait. That photograph was taken on a cloudy day under soft light. If you were to add it to your original composition, the color and shadows created by the lighting on the subject would make it obvious that the person was not alongside their family members when the portrait was taken.
The noteworthy feature of Project Blend is a button labeled Harmonize. This AI-based technology modifies the lighting and color on the subject to match those of the background to which the subject has been added. Harmonize even creates shadows on the background in places where they would appear had the subject been photographed on-site. The technology works for animals as well as people. As seen in the demo video, the results are striking, and it is difficult to tell which people were originally present in a scene and which ones were added in post-production. Especially noteworthy is how the technology accounts for the position of the light source in the original image. Had that light source created a lens flare, a lens flare is added to the image.
Were it not for the recent AI innovations I use daily, I would be skeptical that this concept will ever reach my laptop. But when you consider my most recent article here on Fstoppers was accompanied by a thumbnail of a dark-skinned woman with natural hair using a laptop—an image generated from scratch in Photoshop by my typing, “dark-skinned woman with natural hair using a laptop”—there is every reason to believe that it won’t be long before I can instantly make adjustments to the backgrounds of my headshots. Others will use the technology in far more challenging and creative ways. I trust I am not the only one looking forward to seeing the results.
Why are people photographing....???
I don't understand what you are trying to say. Might be best if you added more words.
So you're annoyed you can't make cringey fake portraits easier?
No, I'm annoyed that when I shoot a headshot in an office I don't have the skill to get the background to be medium grey. I believe Adobe's improved compositing will solve my problem.
Ya so cringey linkedin fake portraits
While I’m not averse to a snarky comment when one is due, I feel in this instance you were rather harsh. I get the feeling the comment may have been driven by ignorance of the problem of color matching in composite images. Older methods, such as those using helper layers, are both time-consuming and pretty tedious, and it’s not all about making "fake" anything; it’s about creating original artwork or images. Roll on a built-in Photoshop solution for color matching. The author was just using a fairly simple example of what is a relatively tricky problem. There is nothing fake about composite images. Creative, yes; fake, no. One comment about all the new AI-enabled remove tools in Photoshop: be careful using and trusting them, especially on hi-res images, as they can produce some strange results. Always check over what has been produced, as I find they quite often leave behind artifacts or witness lines that need additional work to remove.
Why not dump the camera entirely and do everything with AI?
Are you asking me this question or is it rhetorical? If you are asking me, I think it would be foolish for me to drop photography entirely simply because there are situations where my editing ability is not strong enough for me to adjust the color of a background that was illuminated by ambient lighting. I take photographs because I enjoy doing so. Until that changes, I will continue to be a photographer.
The power of AI for the production of accurate, quality, high-resolution images that stand up to scrutiny is way, way overhyped.
AI may not be fully here yet for high-resolution images, as you state. But this technology is under a year old, I believe. There is every reason to believe that it will be undetectable in 2-3 years. Mind you, if it were up to me, I wish it didn't even exist and we had to focus more on getting it right in camera, or perhaps we would become comfortable sharing images that are less than perfect. But as long as AI is here, I will embrace it, just as I did when we went from analog to digital photography decades ago.