Is Computational Imaging the Death of Truth in Photography?

I’ve been impressed by the computational imaging in the Google Pixel’s “Night Sight” mode. I’ve also been scared, because there are times when the images it captures bear no resemblance to reality, and therein lies the danger of this emerging technology in smartphones.

The image at the top of this post, showing Bridgeport, Connecticut, was taken with Night Sight. For a phone (and even for a dedicated camera), the image is impressive when you consider it was taken handheld on a small-sensor device.

However, the scene above doesn’t really exist. It’s an amalgamation of Google’s secret-sauce algorithms and several exposures the phone captured and composited on the fly, as the sketch below illustrates. The actual scene looked closer to this photo, taken without Night Sight mode:

The same photo of Bridgeport above, minus the Night Sight mode in the Pixel 3a XL.
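To make the compositing concrete, here is a minimal, hypothetical sketch of the core idea: align a handheld burst of short exposures, average them to suppress noise, then lift the shadows. Google’s actual pipeline is proprietary and far more sophisticated (per-tile alignment, motion rejection, learned white balance), so treat this as an illustration, not a reconstruction. The filenames are made up:

```python
# Toy multi-frame merge, NOT Google's actual Night Sight pipeline.
import cv2
import numpy as np

def merge_burst(frames):
    """Align a handheld burst to the first frame, then average it."""
    h, w = frames[0].shape[:2]
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = frames[0].astype(np.float32)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Estimate the global shift from handheld shake via phase correlation.
        (dx, dy), _ = cv2.phaseCorrelate(ref, gray)
        shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
        acc += cv2.warpAffine(frame, shift, (w, h)).astype(np.float32)
    mean = acc / len(frames)  # averaging N frames cuts noise by roughly sqrt(N)
    lifted = 255.0 * (mean / 255.0) ** 0.5  # crude gamma curve to lift shadows
    return np.clip(lifted, 0, 255).astype(np.uint8)

# Hypothetical filenames for a burst of dark, handheld exposures:
burst = [cv2.imread(f"frame_{i}.jpg") for i in range(8)]
cv2.imwrite("merged.jpg", merge_burst(burst))
```

Even this naive version can pull a readable image out of near darkness, which is exactly why the result can drift so far from what the eye actually saw.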

It’s not hard to determine which is the “better” photo. Night Sight is capable of some amazing results in near darkness (and actually seems to do better handheld). If that’s your goal, Night Sight delivers easily. But what if the goal is the truth? Then the answer isn’t so easy.

Night Sight is, thankfully, a feature that can be turned on and off on Google’s phones. There are times when I even turn it on to give my daytime photos a little help with reining in the highlights and shadows for a more realistic image (which suggests the algorithms aren’t all that consistent in how they process photos). But on Apple’s new phones, Night mode engages automatically, in keeping with Apple’s K.I.S.S. (Keep It Simple, Stupid) philosophy for its camera app. This means your only choice may be the “fake” image, whether you want it or not. If you are a journalist, for instance, charged with delivering truthful imagery to the public, this could be a huge problem.

Understandably, for business reasons, Apple and Google don’t share the secrets of what their algorithms are doing, and even if they did, there’s little assurance they’re applied evenly in all situations. There’s no telling whether you’ll get a little lift that brings an image closer to what the eye saw, or something that’s a wholesale created image.

For comparison, here’s what Night Sight did to the same scene during the day, where the algorithm decided to use all the extra information for an image with less noise and slightly more shadow detail, compared to a similar image from a dedicated camera (in this case, a Fujifilm X-T1 with the XF 35mm f/2 R WR lens):

In this instance, computational imaging helped the truth, rather than hurt it, but again, there’s no telling what you’ll get.

One could argue that camera manufacturers in the digital age have been engaging in a sort of computational imaging since the beginning; it’s how photographers get trapped in endless arguments about how one brand’s color science is better than another’s, or whether the only true representation of a scene is film (but which stock?).

It’s possible that years from now, the arguments about computational imaging will fall into the same camps and that computational imaging as we know it today will simply be known as “imaging.”

What are your thoughts on computational imaging and truth in photography? Leave your thoughts in the comments below.

Wasim Ahmad is an assistant teaching professor teaching journalism at Quinnipiac University. He's worked at newspapers in Minnesota, Florida and upstate New York, and has previously taught multimedia journalism at Stony Brook University and Syracuse University. He's also worked as a technical specialist at Canon USA for Still/Cinema EOS cameras.

33 Comments

In my opinion, it depends on what should be achieved with the picture. In my eyes, it is absolutely legitimate to try and achieve the "prettiest possible picture" at the given moment. If that is your goal, you will work your a.. off in Lightroom to improve the look of your initial capture. So using the automatic algorithms of the camera/phone is just an automatic version of that (although I still prefer the artistic work of the photographer in post). If your goal is to show reality, on the other hand, either way of "improving" the picture is just a kind of forgery.

Whatever we think of it, this is something that can't be stopped. This will run its course.

While true in certain ways, I also believe we use this type of thinking to throw up our hands and say, "it's not worth the effort to counteract this." It turns out there *are* some steps that can be taken in most forms of media, such as digital watermarks. While a required disclosure automatically embedded into the image, along the lines of "this was taken using Night Mode by Google software," might not seem worth the effort to press tech companies on, I think it's the kind of thing we as professionals ought to lobby for.
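For the curious, here's a minimal sketch of what such a disclosure could look like today, written into an image's EXIF metadata with Pillow. The tag choice and the wording of the note are purely illustrative, and plain metadata is trivially stripped, so a real standard would need something tamper-evident rather than this:

```python
# Illustrative only: embed a processing-disclosure note in EXIF metadata.
from PIL import Image

def tag_processing(src, dst, note):
    img = Image.open(src)
    exif = img.getexif()
    exif[0x010E] = note  # 0x010E is the standard TIFF/EXIF ImageDescription tag
    img.save(dst, exif=exif.tobytes())

# Hypothetical filenames and wording:
tag_processing("night_shot.jpg", "night_shot_tagged.jpg",
               "Composited from multiple exposures (Night Sight)")
```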

I think this is an amazing idea, actually.

Isn't a long exposure, or shallow DOF, or even using Velvia film in some way falsely distorting what our eyes see in reality?

Sorry Wasim, and with the greatest respect, I don't get it. I can, in a few seconds, produce the same effect in Lightroom and maintain as much nighttime atmosphere as I want. Am I missing something? Please note I have not spent any time making fine adjustments to the attached.
If your premise is that the photo app is the undertaker of truth in photography, then I must absolutely agree with you. I think this has been the case for some time and can be seen on any social media outlet today, especially in the heavy hands I see at work: over-manipulation, oversharpening, colour rendering, etc. I am drawn to a quote from Sontag's book 'On Photography' that I often use with my students: "The painter constructs, the photographer discloses." In the main, photographers are now constructing their images and, sadly, not always relying on their own talent.
This issue has always been problematic in the area I work in, which is reportage and documentary.

Hey BM, with all due respect, I do think you are missing something.
Have you used Google's Night Sight mode? If you haven't, it's pretty amazing. I downloaded it on my Android phone a couple of months ago and have taken pictures of scenes that are almost pitch black, with only some light available. Again, to the naked eye, so pitch black that you can't tell what color the walls are... When I take the picture and see what comes up on the screen, I am like "woaahh... how?". Otherwise, I would need a tripod and a camera that can do long exposures, take 3-5 images, bring those into Lightroom, blend them to create an HDR image, etc... Or, I can take my phone out of my pocket, wait like 5 sec, and have an image pretty much ready to post on social media without having to do anything else :)
For the other setup, I would have to carry my camera, lens, and tripod, and then have access to a computer with a program that can create HDR images, and I would need TIME.
Or I can do this in a matter of seconds with the phone I already have.

PS. I know the limitations of Google Night Sight images coming from my Android phone, and I know I can create HDR JPEG images from my DSLR to speed up the process, but I hope you get the point of the ease of access and the speed of using a phone.
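To show how much the phone is collapsing, here's the bracket-and-blend workflow I described above compressed into a few lines using OpenCV's Mertens exposure fusion. This is just one of several merge methods, it's not what Lightroom does internally, and the filenames are hypothetical:

```python
# Bracket-and-blend, sketched with OpenCV's Mertens exposure fusion.
import cv2
import numpy as np

# Three bracketed exposures of the same scene (tripod or well-aligned).
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation, and exposure;
# unlike true HDR merging, it needs no exposure-time metadata and produces
# a directly displayable result.
fused = cv2.createMergeMertens().process(exposures)
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

The phone does the equivalent of all of that, plus capture, in one tap.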

--"...I don't get it. I can, in a few seconds, produce the same affect in Lightroom and maintain as much nighttime atmosphere as I want."

Really? You think these are the same? Your atmosphere is littered with RGB noise.

Photographs are not now, and never have been, a literal recreation of reality. This didn’t start with the digital age. Ansel Adams’ analog images often looked like reality, but were highly manipulated to be more beautiful. So, get over it.

Thanks David - you reminded me why I stopped viewing the Fstoppers site.

You didn't.

No it isn't, but the problem occurs when this becomes the default setting on the most popular 'cameras' in the world. When even the quickest, simplest snapshot is a composite.

Still, the default procedure for any photographer in these kinds of conditions is to bracket and create an HDR. So what the Pixel does is simply take the hassle of doing this in Photoshop/Lightroom away from the photographer. The intent of the photographer is still the same. The end result is also. That it's easier is undoubtedly true, but then again, autofocus is no different.

That's not necessarily true. Not all photographers want a bracketed HDR image.

Autofocus doesn't change the look of the scene. When I started photography, I had to remind myself while editing that not everything is supposed to be a well-lit image. Some images are supposed to be dark and moody, especially if that's what the scene looked like to the eye and you're trying to capture it. I think that, much like with Instagram-type filters, we're going to have a generation grow up with HDR night images seeming normal.

> on Apple’s new phones, the Night mode is an automatic action (...) This means your only choice may be the “fake” image,

This is not correct. Night Mode can be disabled by moving the slider that sets the timer, right above the shutter button, all the way to the left.

You can't turn it on manually, and when it is activated, you have to turn it off for each photo. Is that the correct understanding?

Kind of: not for every photo, but every time you open the camera app again.
You can turn it on similarly, but if it's a bright day, it won't be on by default. It decides whether to turn on based on the environment, but then you can toggle it on or off yourself.

Computational imaging is just the current trend with cell phones. I don't see fancy snapshots being the death of photography until the commercial, food, and fashion industries rely on them.

I think that ever since digital cameras became available and affordable, more and more people have gotten into photography, and with technology advancing, more and more talentless people have gotten into the business and are able to provide acceptable results. This trend will only accelerate.

No matter what technology delivers into the hands of the photographer, it is down to the skill of the photographer to choose the right tools to produce the final image.

Until technology can do that too, and it WILL, and SOON.

It is the death of seeing our real surroundings. A photo primarily shows what's there and what can be captured by the sensor; "AI" (that's what's sold to us) on the other hand can only ever invent things on top of that, based on what it's been trained with. It cannot show reality, just likelihood.

The term "AI" is IMO a fraudulent misnomer. All it really is is interpolation and extrapolation based on statistical methods on a scale that's not been readily possible before. Computational imaging is more correct, but simply putting a dress on the pig.

It is the death of allowing the photographer to interpret the scene.

There is documentary photography, but few photographers actually want or need to practice it. Most of us consider our photography an artistic pursuit (regardless of whether we're bad, good, horrible, or great at it). As such, we are interested in an interpretive approach to our shooting.

What bothers me about Photoshopped images is that they discard the one significant element of photography that distinguishes it from painting: the emotion of the scene at the moment it is captured. You'll never convince me that a photo you edited on your computer while sipping a glass of wine in your studio reflects the emotions you felt at the top of the mountain you just climbed. Computational photography addresses those false narratives. You now have the opportunity to truly create the image while you're experiencing it. Bring it on!

If you look at the often beautiful shots on this site, could anyone take a guess how many of them weren't manipulated in one way or the other? Whether it is in Photoshop or a built-in feature in a camera, the end result isn't the reality.
On this site I have viewed hundreds of headshots of people, and I guess 99% of them had some edits, or even major ones, done to them. The only difference is whether a human does it interactively or a programme does it autonomously.

The problem here is using our eyes as a baseline with which to compare cameras. Now that cameras are beginning to produce results that our eyes aren't able to compete with, it's easy to label the results "unnatural" or even "unreal", when in fact they're just a different interpretation of reality. The ability to achieve these results isn't good or bad in itself, only that it can be used for better or for worse.

It is not about retrieving detail from the actual scene that the eye couldn't capture. It is about details that *the sensor could not capture* getting filled in with artificially generated information based on a data set used to build a limited statistical model of reality. It is an uncontrolled (artistic?) process that very explicitly removes the image from reality.

If you are happy to give up artistic control to what's ultimately equivalent to a fancy Instagram filter, that's fine; but selling it as something reproducing reality is not.

IMO, the big camera manufacturers should start hiring some experts from the smartphone industry and include many of those features in their cameras. Phones and computational photography aren't going anywhere (if anything, they get better every year). It is a fact that the laws of physics regarding lens construction can only be bent up to a certain point (light is light), so a bigger and brighter lens will always give a better image than a small lens on a smartphone. BUT, on the other hand, the new iPhone and Pixel phones do in camera (or in phone?) what we need to do in Lightroom, including saving all the steps of bracketing exposures, noise reduction, sharpening, etc. So IDEALLY, a future mirrorless system could incorporate the simplicity of the process and the benefits of creating the same base file a modern smartphone can offer, with much greater latitude in post and obviously better image quality overall.

In the digital era, survival of the strongest has shifted to survival of the quickest to adapt. I would hate to see camera manufacturers going out of business, BUT as a true photography lover, I have witnessed the way they have been 'milking the cow' for years, selling us small incremental improvements as 'new models'. My D750 does 90% of what my Z6 can do. This means that many folks do not see a need to upgrade. If they want to survive, they need to offer exponential improvements this time...

At what point does the 'camera' not actually take the photo, but instead download a 'perfect' image of the scene stored by Apple or Google? Heck, they could even integrate temporal elements of the scene, such as someone posing for a photo, into that stored 'perfect' image. That, I fear, is the future of computational photography. The algorithm creating the photo is the same across millions of iPhones, so each of those phones will essentially take the same photo.

Don't give them any ideas

This is a rather philosophical question. It is not so much about computational imaging and the truth; it is rather a question of what the truth is. Is the same scene captured on film (analog) closer to the truth than one on a digital DSLR? What if I have chosen the wrong ISO or any other parameter that has an influence on the outcome? Which of the results is the truth then? And if you enhance it in post, what is the truth then? Even by just framing a picture, by the composition, you may or may not manipulate the message the final picture shall convey. You may leave a detail out of the frame or include it. Am I watching the image on a calibrated device or not? Filters, whether in front of the lens or in post, and lighting such as a flash: all of that manipulates the result.

Bottom line, there are endless possibilities here, and true, computational photography is another iteration in the area of manipulating images.

Just my 2 cents.
J.

This is just another hack to make pictures look good on incompetent hardware (fake HDR, squeezing contrast down for display with 8-bit/channel SDR codecs and dim displays).
What a shame that this is being done just as we are finally seeing early adoption of high-nit displays with 10-bit/channel panels, and codecs capable of true HDR storage.