I’ve always known about Google’s Night Sight mode on Pixel phones as a method to take pictures in near darkness, but it seemed somewhat overkill in daylight. As it turns out, it can actually push your pictures to DSLR-like levels of detail and sharpness if you use it right.
I’ve touched in the past on how Night Sight can lift tough shadows in daylight photos, with the implication that it could also tame difficult highlights. That much is a given, but it’s also not all that different from the HDR (High Dynamic Range) modes that have been around for years. What caught my attention as I’ve spent the last few months going through raw DNG files from my Google Pixel 3a XL is just how much detail I’ve been getting from photos shot with the phone’s Night Sight mode.
The scene above is one I’d mentally noted to come back to later with my DSLR, but after a deep, pixel-peeping dive into the phone’s file in Adobe Photoshop, I was so happy with the results that I simply didn’t. I can see pretty much every bit of frost on the car and every grain of detail in the overgrown weeds. The only practical difference a DSLR would get me is finer control over depth of field and more choice of focal lengths. Sure, extra megapixels would yield extra detail, but when one of my main DSLRs is a 12-megapixel shooter (a Nikon D700), the results won’t be all that different.
For another example, here’s a rather mundane shot of our new car. The shadow and highlight retention improvements are apparent even without zooming to 100 percent, but so are some of the sharpness differences:
Just look at the detail in the bushes behind the car and in the foreground grass to get an idea of how Night Sight improves sharpness even in this photo. Zoom in to 100 percent and you can see the sharpness differences even on the stationary car. The computational imaging prowess of the phone simply creates a sharper image, period. It’s a small difference, but it’s there, and for photographers, anything that helps in this regard is (generally) a welcome improvement.
The secret sauce is the blending of multiple images (anywhere from 6 to 15 frames), which goes beyond just lifting shadows and preserving highlights. Obviously, with this many exposures being taken, an extreme amount of movement in the scene will break the merge, but slight movement can actually work in your favor, as the software figures out the sharpest parts of each frame to use. One downside: the exposure time shown in the EXIF data is the exposure time of a single frame in that burst, so there’s no way to tell after the fact which photos were captured using Night Sight.
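Google’s actual pipeline is far more sophisticated (aligned, tile-based merging of raw frames with robust weighting), but the core idea of favoring the sharpest frame per region can be sketched as a toy merge. Everything below is illustrative and assumed, not Google’s implementation: the function names are mine, and a simple Laplacian magnitude stands in for a real sharpness measure.

```python
import numpy as np

def local_sharpness(frame):
    """Rough per-pixel sharpness proxy: magnitude of a discrete Laplacian."""
    lap = (
        -4.0 * frame
        + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
        + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
    )
    return np.abs(lap)

def merge_sharpest(frames):
    """For each pixel, keep the value from whichever frame is locally sharpest."""
    stack = np.stack(frames)                                  # (n, h, w)
    sharpness = np.stack([local_sharpness(f) for f in frames])
    best = np.argmax(sharpness, axis=0)                       # (h, w) frame index
    return np.take_along_axis(stack, best[None], axis=0)[0]   # (h, w) merged image
```

A real burst-merge would also align the frames first and blend smoothly between them rather than picking hard per-pixel winners, but the sketch shows why slight motion between frames isn’t fatal: blurred regions simply lose the argmax to sharper ones.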
Google does a deep dive on its AI Blog about how this all works, and some of its A/B examples even show Night Sight being used to capture more detail. It’s something photographers looking to get the most image quality out of the least camera (their phones) should certainly consider, especially when caught without their DSLR or mirrorless cameras.
Have you found other useful applications for Night Sight mode? Share them in the comments below.