What Is 'Computational Photography' Anyway?

Whether you've realized it or not, photography is moving away from pure optics. For the past few years, smartphone cameras have been relying on computational photography to overcome their physical limitations. But what does that even mean?

Over on his site, Vasily Zubarev has written a serious breakdown trying to answer that very question. He covers how smartphones have pioneered the use of computational photography to match, and in many cases surpass, the capabilities of DSLRs by automatically applying techniques like image stacking, time stacking, and motion stacking, and by using trained neural nets to adjust the final images. Smartphones are now taking pictures that are seriously difficult, if not impossible, to replicate with a purely optical camera.
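To make "image stacking" a little less abstract: the core idea is that random sensor noise differs from frame to frame while the scene stays put, so averaging several shots of the same scene suppresses the noise. Here's a minimal sketch in Python, assuming the frames are already aligned (real smartphone pipelines also register the frames and weight them before merging; the function name and test values below are just illustrative):

```python
# Minimal illustration of image stacking: averaging several aligned
# frames of the same scene to reduce random sensor noise.
import numpy as np

def stack_frames(frames):
    """Average a list of equally sized uint8 frames (H x W x 3)."""
    # Accumulate in float to avoid uint8 overflow, then round back.
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for frame in frames:
        acc += frame.astype(np.float64)
    mean = acc / len(frames)
    return np.clip(mean, 0, 255).astype(np.uint8)

# Noise in each frame is roughly independent, so averaging N frames
# cuts noise by about a factor of sqrt(N) while the signal stays put.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = np.full((4, 4, 3), 128, dtype=np.float64)  # "true" scene
    noisy = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255)
             .astype(np.uint8) for _ in range(8)]
    print(stack_frames(noisy))  # values cluster near 128
```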

And what we're seeing in the current generation of smartphones is only the tip of the iceberg. Every facet of photography, from lenses to lighting and focus to filters, is currently being replicated and improved upon using computers and computational techniques in the lab.

"What is a photo?" is already a hard question to answer when auto-improved, auto-HDR, optically impossible smartphone images are becoming the norm. And it's only going to get harder. 

For the full breakdown, go read Zubarev's great article. Though fair warning: there is the occasional bit of bad language.

Harry Guinness is a writer and photographer from Dublin, Ireland—though you'll rarely find him there. His work has been published in the New York Times, Popular Science, and dozens of other places.

5 Comments

I am waiting for the iPhone to come out with lens-specific bokeh effects, like "Zeiss Opton Sonnar 50mm @ f2" or "Schneider-Kreuznach 85mm Tele-Arton @ f5.6". Of course they probably would not get that specific because most of the users would not know what those lenses are!

This already exists (sort of). Check out the camera app called Focos. It uses the iPhone's portrait mode and lets you adjust focus point and depth of field after a photo has been taken. It also has simulations of various vintage lenses (including bokeh effects).

I know of the app, but I did not know it had lens simulations. My daughter just got the 11, so I need to check it out. Thanks.

Drifting deeper into The Matrix...

The human eye (as well as the amazing eyes of hawks and owls) is heavily computational. The optics, as well as the sensor resolution, are strong only in a small central area; everything else is filled in by computation.

[Horseshoe crabs have eyes that work well from starlight up to sunlight; the retina actually reconfigures itself to the ambient light.]