Artificial intelligence is pervading every corner of our digital lives, and Adobe is powering its editing software with Sensei, its artificial intelligence platform. But so far, Sensei lacks the ability to deal automatically with the common issue of sensor spots in Lightroom, a feature that would make our lives much easier.
Digital image editing is changing at a rapid pace. With new features and tools being added to software every month, we photographers and editors have to constantly learn and hone our practice with each new introduction. But now, there's a new wave of AI tools hitting the world of photo editing, and it's removing the need to have such in-depth knowledge.
Adobe has introduced AI technology in the form of Adobe Sensei. Sensei is set to bring about a huge change across many of Adobe's software platforms, including photography and video editing applications such as Photoshop, Lightroom, and Premiere. Photoshop CC offers a powerful first look at what Sensei can do for your day-to-day image edits with the beta filters in the new Neural Filters range, but I feel there could be an even bigger shift in image editing when this kind of intelligent filtration hits Lightroom.
As anyone who has struggled with sensor spots knows, removing these blemishes from all of our photos is time-consuming and frustrating. Product and commercial photographers and retouchers also bear the burden of spot-removing tiny dust specks that litter the frame, and it's tedious work. So, why hasn't AI stepped in to automate this process? By using Adobe Sensei, we could quickly reduce the sensor spots and dust in photographs without the need to spend hours cloning them all out. So, let's take a look at how that might work if Adobe were to roll it out to Lightroom.
Remove Sensor Spots
We already have the option of visualizing spots using the Spot Removal Tool (keyboard shortcut Q) and then painting with the tool to manually remove spots and blemishes on photos; you can see my other post on removing sensor spots in Lightroom to find out how to do that. There's no reason (at least in my mind) why this couldn't be automated. In fact, Adobe could open this up to removing dust and scratches as well. The Neural Filters in Photoshop are already teasing us with a photo restoration option and a dust & scratches filter, but neither is yet available at the time of writing.
It's brilliant to see that Adobe is heading in this direction already, and I'd like to see this integrated into Lightroom and Lightroom Classic. I use Lightroom about 80% of the time, with the remaining 20% of my editing time spent finishing images in Photoshop with tools and features that aren't available in Lightroom. So, including this kind of automated processing through artificial intelligence tools like Adobe Sensei would really cut down time for me and others who need to do this kind of processing on a daily basis.
Give us Sliders
If you use Lightroom Classic to remove sensor spots or dust from your images, you'll probably already be aware of the ability to visualize spots when using the Spot Removal Tool. This tick box also has a slider that increases or decreases the sensitivity of the visualize option. This same feature could be utilized by Adobe Sensei when removing spots, although it would take some work to bridge the gap between making spots more prominent and teaching the machine to actually recognize them and subsequently remove them.
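To make that idea concrete, here's a rough sketch in Python with NumPy of how a sensitivity slider could drive spot visualization. This is purely illustrative: the function name, the box-blur neighborhood, and the way sensitivity maps to a deviation threshold are my own assumptions, not how Lightroom actually implements its Visualize Spots option.

```python
import numpy as np

def visualize_spots(gray, sensitivity=0.5):
    """Flag candidate dust spots by local-contrast thresholding.

    gray: 2-D float array with values in [0, 1].
    sensitivity: in (0, 1]; higher values flag fainter spots,
    mirroring how a visualize-spots slider might behave.
    Returns a boolean mask of suspected spot pixels.
    """
    h, w = gray.shape
    # Local mean via a simple 5x5 box blur (edge-padded).
    pad = np.pad(gray, 2, mode="edge")
    local_mean = sum(
        pad[i:i + h, j:j + w]
        for i in range(5) for j in range(5)
    ) / 25.0
    # Deviation from the neighborhood; the threshold shrinks
    # as sensitivity grows, so fainter spots are flagged.
    deviation = np.abs(gray - local_mean)
    threshold = 0.2 * (1.0 - sensitivity) + 0.02
    return deviation > threshold
```

A pixel that stands out sharply from its 5x5 neighborhood (as an isolated dust speck does) exceeds the threshold and is flagged, while smooth areas are left alone.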
A slider is an easy way to alter the sensitivity of the spot removal AI feature, and a single slider might be best for Lightroom CC, where a simple user interface is desired to homogenize the app across devices and make it easier for non-professionals to edit their photos. However, a slightly more complex version could be available for Lightroom Classic users who want a little more control when they edit their images. For example, three sliders could allow editors to adjust the threshold, radius, and amount of spot removal with relative ease, similar to the way the Dust & Scratches filter works in Photoshop CC.
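As an illustration of how those three sliders might interact, here's a hypothetical Python/NumPy sketch modeled loosely on the median-filter approach behind a Dust & Scratches-style cleanup. The function name, parameter names, and blending logic are all my own invention, not Adobe's implementation.

```python
import numpy as np

def remove_spots(gray, threshold=0.1, radius=2, amount=1.0):
    """Dust & Scratches-style cleanup driven by three sliders.

    threshold: minimum deviation from the local median for a pixel
               to count as a spot (higher = fewer pixels touched).
    radius:    half-size of the median neighborhood.
    amount:    blend strength, 0 (no change) to 1 (full replacement).
    """
    h, w = gray.shape
    pad = np.pad(gray, radius, mode="edge")
    side = 2 * radius + 1
    # Local median over the (2*radius+1)^2 neighborhood.
    stack = np.stack([
        pad[i:i + h, j:j + w]
        for i in range(side) for j in range(side)
    ])
    median = np.median(stack, axis=0)
    # Only pixels deviating past the threshold are blended toward
    # the median, so fine detail below the threshold is preserved.
    spots = np.abs(gray - median) > threshold
    out = gray.copy()
    out[spots] = (1 - amount) * gray[spots] + amount * median[spots]
    return out
```

Raising the threshold protects legitimate texture, widening the radius catches larger blobs, and lowering the amount softens the correction rather than replacing pixels outright.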
The Option to Remove
Now, if you've seen my other post about Photoshop's Neural Filters, you'll know that automated photo adjustments aren't quite there just yet. Despite the myriad automatic options for fixing photos in image-editing software, we can't yet rely on it to develop our photos for us.
Manual adjustment to tweak or undo some of the automated changes could take the form of an adjustment brush such as the one that already resides in Lightroom. Simply mask out the sections you don't want altered, and it would only take a short time to perfect the result, rather than going through and removing every little speck. The process could look like this: apply the dust removal AI tool, make some quick tweaks with the sliders, and then apply selective adjustments with an adjustment brush to restore affected areas that don't look good.
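That last masking step could be as simple as blending the automated result back toward the original wherever the editor brushes. A minimal sketch, with function and parameter names of my own invention:

```python
import numpy as np

def apply_with_mask(original, auto_result, keep_mask):
    """Blend an automated edit with the original, honoring a user mask.

    original:    the untouched image (2-D float array).
    auto_result: the image after automated spot removal.
    keep_mask:   boolean array marking regions the editor brushed out,
                 i.e. where the automated change should be discarded.
    """
    out = auto_result.copy()
    # Wherever the brush painted, restore the original pixels.
    out[keep_mask] = original[keep_mask]
    return out
```

Everywhere the brush hasn't touched, the automated cleanup stands; brushed regions revert to the untouched capture, which matches how Lightroom's existing adjustment brush lets you localize an effect.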
Conclusion
While Lightroom doesn't currently have this feature, I suspect it won't be long until Adobe sprinkles Sensei fairy dust across its image-editing suite to provide us with artificially intelligent image editing. We can see this with the October update of Photoshop CC and its inclusion of Neural Filters, with more AI-led filters hopefully right around the corner. It's very exciting to see such a wave of artificially intelligent image editing come to the fore, and I'd like to see it in Lightroom as well. As I mentioned at the top of the piece, I edit the majority of the time in Lightroom and find it much better at batch image editing than Photoshop.
The main image was made in part by Mikemacmarketing and is used under Creative Commons.
good article.
AI can't distinguish shades of gray, white, and black. It's the same as AI background removal: it only works with contrasting colors. But you can count on the healing brush using the Lighten mode.
Like Topaz ReMask: try it on silver and gray, or blonde and white.
Sorry, the scientist in me forces me to reply:
While I hate that the term "AI" is used all over the place for mere convolutional neural networks, that statement is just wrong. It's called machine learning for a reason. The basic idea is to "teach" the machine what a [face, cat, dust spot] is by showing it ground-truth data labeled by a human (have you noticed the captchas all over the internet? You're basically assisting in the training of neural networks). In its simplest form it's a yes/no classification game for the whole image, but pixel-wise labeling works as well. During that process the machine "learns" what features make up a [face, cat, dust spot]. Depending on how the data is represented and how the neural network is designed, this is much more than edges and local contrast, and can include all sorts of information such as color values.
Still, for edge cases where even a human has difficulty differentiating elements, it takes a lot of tweaking the structure of the networks and tons of ground truth data. And you still don't get a guarantee that it works properly in the end.
Apart from that, Jason's idea is absolutely sensible.
I find automatic sensor dust removal even more viable in Photoshop CC with its content-aware delete function. There wouldn't be any global adjustments: just find and select, and, under certain parameters, perform a content-aware delete.
Adobe has not added this feature yet because accountants can not code. Adobe's most important resources are in the finance department. 😂