Last week, Adobe rolled out a few updates for Photoshop, including a new addition to its array of Neural Filters: Depth Blur. Right now, this tool produces some pretty bad results, but given time it could affect how much we crave expensive lenses with huge apertures.
Adobe introduced Neural Filters last year, the most headline-grabbing of which was the ability to add fairly realistic aging or slightly disturbing smiles. Tucked away were a few filters built on Photoshop’s ability to determine depth within a scene, drawing on the knowledge of Adobe Sensei. Sensei is Adobe’s machine learning technology: it takes your image, uploads it to the cloud, and then attempts to make intelligent deductions based on a library of millions of images.
The original release of Neural Filters included the option to add haze, and being a big fan of photographing fog in a forest, I had a brief play before quickly realizing that the results were shoddy, and I didn’t give it any further thought. Haze is now included as one of the sliders in Depth Blur and sits alongside some fairly major changes. Photoshop gives you the option to quickly (sort of: my average turnaround was about a minute) knock the background out of focus and give the impression that you have created a photograph with a much shallower depth of field. If you’re a fan of bokeh, don’t get too excited just yet: the results are not great, but there’s a reason for that.
Can Photoshop Catch Up With Smartphones?
Phones have been using AI to replicate a shallow depth of field for a few years, and the results you see on a tiny screen are passable for social media but tend to fare poorly when you zoom in. Edges can be smudgy, and complex areas such as hair can be hit and miss. Fortunately, the overwhelming majority of users don’t notice and are simply happy that a portrait suddenly looks a little less like it was shot on a phone and is a bit more cinematic. It’s fun.
Photoshop, however, is a serious tool for manipulating high-resolution images, so you’d expect that when Adobe decides to launch a similar feature, it would have it nailed. On the contrary: this is a beta version, and it does a pretty poor job. Just as with smartphones, edges can be confused, and hair is a problem. Right now, high-resolution results are a long way from being acceptable, which feels a bit odd given that Photoshop’s selection tools are incredibly sophisticated. Why? Because despite both being in Photoshop, these are two very different and separate technologies, and AI has a lot of catching up to do.
The Magic of the Depth Map — or Not
Generating a depth map from a two-dimensional image is no easy task, even when you have Adobe’s computing power. Images with distinct layers (e.g., a tight crop of a person standing in the foreground, mountains and sky in the background) are relatively straightforward, but trying to make a machine understand how a surface gradually extends into the distance is challenging, and if that surface has a complex texture, the results will often be jarring, as this image demonstrates.
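To make the core idea a little more concrete, here is a minimal sketch in Python (using OpenCV and NumPy, with placeholder file names) of how a grayscale depth map can drive a synthetic shallow depth of field: the further a pixel sits from the chosen focal plane, the more it is blended with a blurred copy of the frame. This is purely an illustration of the general technique, not how Adobe’s filter works internally; Depth Blur has to estimate the depth map itself and is considerably more sophisticated.

```python
# Minimal sketch: use a depth map to fake a shallow depth of field.
# "photo.jpg" and a matching grayscale "depth.png" (0 = near, 255 = far)
# are placeholders; a real tool estimates the depth map itself.
import cv2
import numpy as np

image = cv2.imread("photo.jpg").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Blur strength grows with distance from the focal plane (here, the nearest point).
focal_plane = 0.0
weight = np.clip(np.abs(depth - focal_plane), 0.0, 1.0)
weight = cv2.GaussianBlur(weight, (0, 0), 3)   # soften the mask to avoid hard seams
weight = weight[..., np.newaxis]               # broadcast across the colour channels

# Blend the sharp original with a single heavily blurred copy, per pixel.
blurred = cv2.GaussianBlur(image, (0, 0), 15)
result = (1.0 - weight) * image + weight * blurred

cv2.imwrite("fake_shallow_dof.jpg", result.astype(np.uint8))
```

Even this crude version makes the key point: the quality of the result depends almost entirely on the quality of the depth map, which is exactly where the machine learning is currently falling short.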
The two images below demonstrate the issue. On the left is an image shot at f/1.8; on the right is the same scene at f/5.6 (Stefan almost managed not to move, bless him) and with the Depth Blur filter applied.
The image on the left was shot at f/1.8. The image on the right was shot at f/5.6 and then run through Photoshop's Depth Blur Neural Filter.
You can see a few more examples in this excellent video from Unmesh Dinda of PiXimperfect, who has provided an insight into the tool’s various shortcomings. Creating examples where Depth Blur does a poor job is not difficult.
If It's So Bad, Why Has Adobe Released It?
So, why is Adobe publicly beta-testing a feature that performs so poorly? There's a clue in that question. My assumption is that Adobe needs to give its machine learning the opportunity to do exactly that — some machine learning. Each time you use one of these filters, Photoshop asks you if you’re happy with the results, and that all gets fed back into the system. In time, it will improve, and as discussed below, that’s when things will get interesting.
Where It Works
In theory, converting an image that already has a fairly shallow depth of field into something even shallower should be comparatively simple: many of the edges the filter struggles with will be sharply defined against an already-blurred background and therefore easy to identify, while areas that are in gentle transition won’t feel so jarring when more blur is applied.
This is my experience following some early testing. If the depth is not too complex and there is already some out-of-focus drop-off around some of the edges (such as hair), you might be able to put this filter to good use.
On the left: 35mm at f/2.8. On the right: the same image with the Depth Blur filter applied at 100%. Ignoring the fringing over her right shoulder, does it feel a bit more like f/1.4?
A second example:
On the right: 85mm at f/1.8. On the left: the same image with the Depth Blur filter applied.
Here's the layer that the Depth Blur filter generated:
Here's a 100% crop, with and without the filter:
No doubt there are parts of the image where Photoshop has struggled (shoulders seem to be problematic again!), but given how bad some of the other examples have looked (see the PiXimperfect video), this is impressive.
Final Thoughts
The world wants bokeh. Phones are trying their best to create it, lens manufacturers promise creamy backgrounds and smooth balls, and photographers frequently save their pennies to bump themselves from a more affordable f/1.8 version to the ludicrously expensive f/1.4 version, if not f/1.2. In five years, Depth Blur might be at the stage where a chunk of prospective buyers will settle for the cheaper option or simply choose to shoot with a greater depth of field to ensure accuracy of focus, safe in the knowledge that Photoshop can work its magic later with a few clicks.
A situation where wide-aperture lenses are no longer so coveted might be hard to imagine given that Depth Blur can’t figure out where someone’s hand stops and a mountain begins, but technology moves fast. Right now, it's easy to get bad results from this early version, but used on the right image, it's not too disastrous, and there are signs that those huge, super-fast lenses might not be quite so essential in the not-too-distant future.
Have you played with Photoshop's latest feature? Let us know your experiences in the comments below, along with whether you think Depth Blur might eventually change the way that we shoot.
Great, informative article. Also VERY cute dog.
Now wait a minute, Mr. Northrup. My ex-wife looks very nice: not cute, but with shorter hair on her legs and face.
Thanks, Tony! Stefan is the best, assuming you can get past the fact that, being a rescue dog, he bites strangers. A lot. 😬
The vast majority of the time, I am not interested in a computer shortcut. I am very comfortable with computers. I have made a living with them, and that part of me meshes nicely with digital photography, but I want to think. I want my choices to matter.
I like the process of photography, the thought patterns, etc. I'm not saying I try to get everything right in camera, but I choose my lens, my ISO, shutter, aperture, framing, all of it with a process in mind. I don't like the idea of just pointing and clicking at things willy-nilly and having a computer do whatever it will after the fact.
I agree. I feel like this sort of tech will satisfy some casual photographers, but not the more serious type. Even if the tech is PERFECT, in the back of my mind, I would feel like I had cheated or something.
If you pixel peep long enough, you will eventually see actual pixels. I use it professionally. I used to use a Hasselblad H6D with an HTS 1.5 tilt-and-shift adapter on an Orange Dot 80mm for commercial portraiture; now I use both the blur and the tilt and shift in PS to achieve not just the same but better results, and I do not need to use the tilt-and-shift adaptor, which is a bonus, as it is quite fiddly. On set, this process would take four minutes of everyone's time per shot, sometimes more; now I can do it in a chair in the studio while drinking coffee. And of course, it is even better than that, as the process can be used to select which areas are out of focus or T+S'd, as it were. Have my clients noticed? No, of course not. Have I? Yes, I have more time at home. It's a win.
Why not just add a dedicated depth sensor to cameras: a secondary phone-style camera module (how much would that cost, considering every cheap to midrange phone has an assortment, and cameras and flashes already have sensors of their own?), a lidar sensor, and so forth, specifically to capture high-quality depth info and incorporate it into every raw file so it can be used by image processing software (both in camera and in post-processing)? One day it'll probably just be another variable, like color temp and exposure, that can be accessed and adjusted creatively to one's liking.
Obviously, camera companies have thought about it (Sony, for example, makes practically everything, so there's no way they haven't considered it) and are simply not doing it for as long as they can to keep lens sales steady. It's not as if cameras don't already contain sensors and processing to capture all manner of image-related info for focus and tracking; it's only when someone 'disrupts' the market that the rest will grudgingly admit defeat and then compete in the new product/tech/feature category that will then exist beside lenses.
Perhaps we'll see this technology trickle down from Photoshop into Lightroom, which could then use depth maps created in smartphone photos (or edit them in Photoshop via Lightroom).
The iPhone has this sensor, but it's not helping much, imho.
Food for thought: https://www.diyphotography.net/the-klens-one-turns-any-camera-into-a-lig...
As the author says, it's not perfect, it's still learning and still in beta. But I like what I see so far and surely it can only get better.
What would be nice is if it could simulate pretty bokeh patterns rather than a flat, uninspiring blur.
Blur by itself is pointless. It's useful for subject vs. background separation, but there is a catch: if you blur too much, instead of a subject "pop" you'll get quite the opposite. The trick is in the subject edges. The "pop" happens if the edges are razor sharp but the background is still blurry. If this software cannot even remotely figure out the edges, then I'm not sure what the point of it is. It's probably better than nothing, but it should be used with manual adjustments: duplicate the image, apply this filter, and then blend. Same stuff as for Gaussian blur or lens blur, but this one can probably detect the edges better, so a little less work.
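For anyone who wants to try the blended approach described above, here is a rough sketch of that workflow in Python with OpenCV; the file names, mask, and blur strength are all placeholders, and in Photoshop you would do the same thing with a duplicated layer and a layer mask rather than code.

```python
# Rough sketch of the "duplicate, blur, blend" workflow described above.
# "portrait.jpg" and a hand-painted "subject_mask.png" (white = subject)
# are placeholders.
import cv2
import numpy as np

image = cv2.imread("portrait.jpg").astype(np.float32)
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
mask = mask[..., np.newaxis]                  # broadcast across the colour channels

# The blurred duplicate becomes the background layer.
background = cv2.GaussianBlur(image, (0, 0), 12)

# Keep the subject's edges razor sharp by compositing the original over the blur.
composite = mask * image + (1.0 - mask) * background
cv2.imwrite("blended.jpg", composite.astype(np.uint8))
```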
I use the "portrait" (aka "fake bokeh") mode on my iPhone sometimes. Now they even have a depth sensor, but the results are still (and pretty much always) laughable.
I never got much of a chance to explore this new feature. This latest Photoshop update crashed my workstation six times in the first few hours, and I reverted to the previous version. Looking forward to exploring it, eventually.
If you are a real photographer, you know how to blur the background, so you won't need software to do it for you.
I don't know. I think at the heart of everything, light is, and will always be, the most important thing. That is what photography IS. Shallow DOF can be very useful for isolating a subject, making it appear as the most important thing in the image, but at the end of the day, it is a tool in the toolbox. Maybe I'm wrong, I don't know. Just my thoughts.
How can an optional utility that might have creative value, but that no one has to use, be "terrible"?
Another tool for lazy photographers.
Either do it right in camera or just click a button in LR afterwards.
Reminds me of the joke: There are two types of photographers...