With the latest batch of updates, Photoshop has added a new feature to its array of neural filters: Depth Blur. Very much in beta, this tool has potential, but it clearly needs a lot of improvement before it's worth using.
Unmesh Dinda of PiXimperfect runs you through the new filter, and while the underlying depth-mapping functionality opens up plenty of possibilities, it's clear from this beta version of Depth Blur that Adobe has a lot of work to do before it becomes useful to photographers working with high-resolution images. Dinda shows that existing tools within Photoshop can create far better results, and Adobe's engineers may well look to merge these techniques as the filter evolves.
While Dinda's experience shows the current limitations, Adobe's ability to harness machine learning will only expand, particularly as more training images become available. If you're wondering why Adobe has decided to roll out a beta feature that is still so far from producing good results, it's probably because its machine learning model needs feedback to figure out what works and what doesn't: notice how the dialog box asks you each time whether you're happy with the results. The neural filters depend on this feedback to improve.
While it's easy to scoff at these early efforts, it's quite possible that in five years you will struggle to differentiate between an image shot at f/1.4 and the same scene shot at f/5.6 with depth blur applied. Whether this will merely increase the number of images with an insanely shallow depth of field or whether it will have practical applications for photographers remains to be seen.
Could this technology make super-fast lenses a thing of the past? Let us know your thoughts in the comments below.