Is Computational Imaging the Death of Truth in Photography?

I’ve been impressed with the computational imaging I’ve seen from the Google Pixel’s “Night Sight” mode. I’ve also been scared, because there are times when the images it captures bear little correlation to reality, and therein lies the danger of this emerging technology in smartphones.

The image at the top of this post, of Bridgeport, Connecticut, was taken with Night Sight. For a phone (and even for a dedicated camera), the image is impressive when you consider it was taken handheld on a small-sensor device.

However, the scene above doesn’t really exist. It’s an amalgamation of Google’s secret sauce and several exposures the phone took and composited on the fly. The actual scene looked closer to this photo, taken without Night Sight mode:
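The core idea behind modes like Night Sight — averaging a burst of short exposures to beat down sensor noise — can be illustrated in a few lines. This is only a toy sketch of the general principle: the real pipelines also align frames, reject motion, and apply proprietary tone mapping that neither Google nor Apple has published.

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of noisy exposures to reduce noise.

    Toy illustration of multi-frame stacking; real night modes
    also align frames and apply undisclosed tone mapping.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a dim scene photographed 16 times with heavy sensor noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 20.0)  # true (dim) luminance
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

single_noise = np.std(frames[0] - scene)            # noise in one frame
stacked_noise = np.std(stack_frames(frames) - scene)  # noise after stacking
# Averaging N frames cuts random noise by roughly sqrt(N) — about 4x here.
```

The noise reduction is real physics, but everything the pipeline does after the average — lifting shadows, remapping color, choosing how bright “night” should look — is where the editorializing this article worries about happens.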

The same photo of Bridgeport above, minus the Night Sight mode in the Pixel 3a XL.

It’s not hard to determine which is the “better” photo. Night Sight is capable of some amazing results in near darkness (and actually seems to do better when handheld). If that’s your goal, then Night Sight delivers handily. But what if the goal is the truth? That question has no easy answer.

Night Sight is, thankfully, a feature that can be turned on and off on Google’s phones. There are times when I even turn it on to give my daytime photos a little help with reining in the highlights and shadows for a more realistic image (which suggests the algorithms aren’t all that consistent in how they process photos). But on Apple’s new phones, Night mode engages automatically, in keeping with Apple’s K.I.S.S. (Keep It Simple, Stupid) philosophy when it comes to its camera app. This means your only choice may be the “fake” image, whether you want it or not. If you are a journalist, for instance, charged with delivering truthful imagery to the public, this could be a huge problem.

Logically, for business reasons, Apple and Google don’t share the secrets of what their algorithms are doing, and even if they did, there would be little assurance that those algorithms are applied evenly in all situations. There’s no telling whether you’ll get just a little lift that brings an image closer to what the eye saw, or something that’s a wholesale created image.

For comparison, here’s what Night Sight did to the same scene during the day, where the algorithm decided to use all the extra information for an image with less noise and slightly more shadow detail, compared to a similar image from a camera with an actual, physical lens attached (in this case, a Fujifilm X-T1 with the XF 35mm f/2 R WR lens):

Fuji X-T1.
Pixel 3a Night Sight.

In this instance, computational imaging helped the truth, rather than hurt it, but again, there’s no telling what you’ll get.

One could argue that camera manufacturers in the digital age have been engaging in a sort of computational imaging since the beginning; it’s how photographers get trapped in the endless arguments about how one brand’s color science is better than another’s and that the only true representation of a scene is film (but which stock?).

It’s possible that years from now, the arguments about computational imaging will fall into the same camps and that computational imaging as we know it today will simply be known as “imaging.”

What are your thoughts on computational imaging and truth in photography? Leave your thoughts in the comments below.



Wayne Cunningham:

At what point does the “camera” not actually take the photo, but instead download a “perfect” image of the scene stored by Apple or Google? Heck, they could even integrate temporal elements of the scene, such as someone posing for a photo, into that stored “perfect” image. That, I fear, is the future of computational photography. The algorithm creating the photo is the same across millions of iPhones, so each of those phones will essentially take the same photo.

Wasim Ahmad:

Don't give them any ideas

JAS Square:

This is a rather philosophical question. It is not so much about computational imaging and the truth; it is rather a question of what the truth is. Is the same scene captured on film (analog) closer to the truth than one on a digital DSLR? What if I have chosen the wrong ISO or any other parameter that has an influence on the outcome? Which of the results then is the truth? And if you enhance it in post, what is the truth then? Even by just framing a picture, by the composition, you may or may not manipulate the message the final picture shall convey. You may leave a detail out of the frame or include it. Am I watching the image on a calibrated device or not? Filters — in front of the lens or in post — and lighting, like a flash: all of that manipulates the result.

Bottom line: there are endless possibilities here, and true, computational photography is just another iteration in that long tradition of manipulating images.

Just my 2 cents.

Robert Enger:

This is just another hack to make pictures look good on incompetent hardware (fake HDR, squeezing contrast down for display with 8-bit-per-channel SDR codecs and dim displays).
What a shame that this is being done just as we are finally seeing early adoption of high-nit displays with 10-bit-per-channel panels, and codecs capable of true HDR storage.