Small, affordable cameras with small sensors and small lenses are doomed to produce images with deep depth of field, but what if you could add shallow depth of field in post?
You've probably heard that current iPhones have "Portrait Mode," which mimics shallow depth of field by building a depth map of the scene using multiple cameras and then adding a realistic blur to the background.
Surprisingly, the iPhone does an incredible job of mimicking shallow depth of field: rather than simply cutting out the subject, it uses the depth map to apply graduated blur to both the background and the subject. Notice the slight blur on Patrick's arm.
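The core idea behind this effect can be sketched in a few lines: blur the whole frame, then blend the sharp and blurred copies per pixel, weighted by depth. This is a simplified illustration, not Apple's or Luminar's actual pipeline (real implementations use lens-shaped bokeh kernels, occlusion handling, and learned mattes); the function names and the box blur here are my own stand-ins.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Naive box blur: average each pixel with its neighbors (grayscale float image)."""
    if radius == 0:
        return img.copy()
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_blur(img: np.ndarray, depth: np.ndarray, max_radius: int = 4) -> np.ndarray:
    """Blend sharp and blurred copies per pixel, weighted by a normalized depth map.

    depth = 0.0 means on the focal plane (fully sharp);
    depth = 1.0 means farthest from it (maximum blur).
    """
    blurred = box_blur(img, max_radius)
    return img * (1.0 - depth) + blurred * depth
```

Because the blend is per pixel rather than a binary subject cutout, a gradual depth ramp across an arm or shoulder produces the slight, believable softening described above.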
Luminar AI recently came out with an update that claims to reproduce "portrait mode" on your computer. To test this software, I shot the same image of Patrick with my Sony a7S III and Tamron 28-75mm at both f/22 and f/2.8. The f/22 image was then brought into Luminar AI, where blur was added.
The results, although not perfect, are quite impressive, especially considering they're almost instant and automatic. Is this effect good enough for professional use? For printed work, I don't think it's quite there yet, but for low-res social media posts, I'm not sure anyone will be able to tell.
It's really exciting to see how photo software has progressed in the last few years. It may not be perfect yet, but what happens when it is? Will "professional" cameras be necessary in a few years?