I’ve been impressed with the computational imaging I’ve seen from the Google Pixel’s “Night Sight” mode. I’ve also been scared, because there are times when the images it captures bear no resemblance to reality, and therein lies the danger of this emerging technology in smartphones.
The image at the top of this post, of Bridgeport, Connecticut, was taken with Night Sight. For a phone (and even for a dedicated camera), the image is impressive when you consider it was taken handheld on a small-sensor device.
However, the scene above doesn’t really exist. It’s an amalgamation of Google’s secret sauce and several exposures the phone took to composite the image on the fly. The actual scene looked closer to this photo, taken without Night Sight mode:
It’s not hard to determine which is the “better” photo. Night Sight is capable of some amazing results in near darkness (and actually seems to do better when handheld). If that’s your goal, Night Sight delivers easily. But what if the goal is the truth? Then the answer isn’t so easy.
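Neither Google nor Apple publishes the details of these pipelines, but the basic idea behind multi-frame night modes — stacking many short exposures so that random sensor noise averages away while the scene itself stays put — can be sketched in a few lines. The code below is a toy simulation of that one principle, not anything from Google’s actual algorithm:

```python
import random

def stack_frames(frames):
    """Average several noisy exposures pixel by pixel.

    Sensor noise is random, so averaging N frames reduces its
    standard deviation by roughly sqrt(N); the signal is unchanged.
    """
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate 8 handheld exposures of the same 16-pixel "scene":
# true brightness 100, each frame adds Gaussian sensor noise.
random.seed(42)
true_value = 100.0
frames = [[true_value + random.gauss(0, 10) for _ in range(16)]
          for _ in range(8)]

single_frame_error = sum(abs(p - true_value) for p in frames[0]) / 16
stacked = stack_frames(frames)
stacked_error = sum(abs(p - true_value) for p in stacked) / 16

print(f"mean error, single frame: {single_frame_error:.2f}")
print(f"mean error, 8-frame stack: {stacked_error:.2f}")
```

Averaging N frames cuts random noise by roughly √N, which is why a burst of quick handheld exposures can beat one long one — and also why the software has so much latitude in deciding what the final scene looks like.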
Night Sight is, thankfully, a feature that can be turned on and off on Google’s phones. There are times when I even turn it on to give my daytime photos a little help reining in the highlights and shadows for a more realistic image (which suggests the algorithms aren’t all that consistent in how they process photos). But on Apple’s new phones, Night mode engages automatically, in keeping with Apple’s K.I.S.S. (Keep It Simple, Stupid) philosophy for its camera app. This means your only choice may be the “fake” image, whether you want it or not. If you are a journalist, for instance, charged with delivering truthful imagery to the public, this could be a huge problem.
Logically, for business reasons, Apple and Google don’t share the secrets of what their algorithms are doing, and even if they did, there’s no assurance those algorithms are applied evenly in all situations. There’s no telling whether you’ll get a little lift that brings an image closer to what the eye saw, or a wholesale created image.
For comparison, here’s what “Night Sight” did to the same scene during the day, where the algorithm decided to use all the extra information for an image with less noise and slightly more shadow detail, compared to a similar image from a camera with an actual, physical lens attached (in this case, a Fujifilm X-T1 with the XF 35mm f/2 R WR lens):
In this instance, computational imaging helped the truth, rather than hurt it, but again, there’s no telling what you’ll get.
One could argue that camera manufacturers in the digital age have been engaging in a sort of computational imaging since the beginning; it’s how photographers get trapped in endless arguments about how one brand’s color science is better than another’s, and about whether the only true representation of a scene is film (but which stock?).
It’s possible that years from now, the arguments about computational imaging will fall into the same camps and that computational imaging as we know it today will simply be known as “imaging.”
What are your thoughts on computational imaging and truth in photography? Leave your thoughts in the comments below.