Can an iPhone Really Match Full Frame in Low Light?

The announcement of Night Mode promises “low-light shots never before possible on iPhone.” Between the new mode and the addition of a truly wide lens, the iPhone is more competitive than ever. But do the shots actually hold up in the field? I tested it against my Nikon Z 7, with surprising results.

Before the comment section sharpens its pitchforks, let me clarify that this isn’t a scientific test. While I tried to keep things comparable and rigorous, this test was undertaken entirely to build my understanding of my new phone’s capabilities and, hopefully, to give everyone some insight into what the new iPhones can do.

Two Roads Diverged

What makes this an interesting comparison, at least to me, is the two very different paths taken to create these shots. Apple is using a tiny sensor, which, even paired with a 26mm f/1.8 lens, shouldn’t be able to compete with the comparatively huge full frame sensor in the Z 7. To make up for the hardware deficit, Apple relies heavily on software, leveraging the significant processing power of the A13 along with techniques like image stacking to produce a result. Meanwhile, Nikon provides the raw file from a large sensor and fast lens, with users expected to adapt their shooting and post-processing techniques to create their desired output.

Computational photography-focused hardware has been explored in a number of ways over the last few years, including Lytro and Light’s L16. Computational post-processing techniques have gained wider use in implementations like focus stacking and noise-reduction stacking, but hardware implementations in major cameras are still lacking. As a result, I think this test can be treated as a tiny preview of the two directions in which cameras are evolving.
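
For the technically curious, here is a minimal sketch of the averaging idea behind noise-reduction stacking, written in Python against a simulated burst rather than real iPhone or Z 7 frames. Actual pipelines like Night Mode also align frames and reject motion before merging, which this deliberately skips.

```python
# Minimal sketch of noise-reduction stacking: average several short exposures
# so that random sensor noise cancels while the static scene reinforces itself.
# The "scene" and noise levels below are illustrative, not measured values.
import numpy as np

rng = np.random.default_rng(0)

scene = np.full((100, 100), 0.25)                       # dim, flat subject
frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(9)]  # 9-shot burst

single = frames[0]
stacked = np.mean(frames, axis=0)                       # average the burst

# Noise (standard deviation around the true value) drops roughly by sqrt(9).
print(f"single frame noise: {np.std(single - scene):.4f}")
print(f"9-frame stack noise: {np.std(stacked - scene):.4f}")
```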

The Test

The major thing I wanted to test was the iPhone’s ability to serve as a replacement for a walkaround camera, particularly in lower light. With that use case in mind, I kept the setup on the Z 7 minimal: no tripod, but a higher ISO and VR enabled. On the Z 7 was a 24mm f/1.4 lens and the FTZ adapter. The iPhone was entirely stock, including the use of the default camera app. Both shots have minimal post-processing, which does lend an advantage to the iPhone, but again, that goes along with the emphasis on ease of use.

With all the disclaimers out of the way, let’s take a look at a shot. Can you guess which comes from the iPhone and which comes from a full frame mirrorless camera with a 24mm f/1.4 lens?

The shot on the bottom comes from the Z 7. Both at full resolution and resized to match the lower-resolution iPhone shot, I prefer the iPhone’s output (top shot). While the fine detail isn’t there, the straight-out-of-camera look is definitely cleaner. Partly, this is because of the iPhone’s much more aggressive noise reduction, and partly because the iPhone struck a better white balance. The iPhone shot also has much higher contrast, with no easy way of tweaking that.

iPhone

Z 7

In this second set of shots, you can see how the very things that made the iPhone shot look better are now cutting against it. With a mixed lighting source, the iPhone’s higher contrast and saturation make this shot look overprocessed, particularly in comparison with the Z 7 shot. Again, the Z 7 shot isn’t significantly processed, so the color and contrast can definitely be refined beyond this flatter look. When tweaking the Z 7 shot in Lightroom, the highlights have much more headroom for recovery, while the shadow areas retain much more clarity.

iPhone

Z 7

This third set of shots is the perfect summary of the two different looks these cameras create. The iPhone shot has the characteristic one-dimensional rendering of a cellphone shot, with generous depth of field, higher contrast, and a preference to expose for the shadows. The Z 7 shot is much darker, as the meter was trying to save the highlights in the exposed light fixtures, but it opens up easily with a quick bump in exposure.

Z 7 with minimal post-processing

For this last one, I wanted to include a comparison of a tweaked Z 7 shot and the iPhone shot. The Z 7 shot was quickly brought up in LR to roughly match the iPhone’s contrast, vibrance, and overall exposure.

My Thoughts

Even now, a few days after getting the iPhone, I’m impressed by its capabilities. Night Mode is a massive leap beyond past iPhones’ low-light ability and clearly worthy of praise. In comparison to the full frame camera, particularly at smaller sizes, it can trade blows with SOOC shots.

Where it falls short isn’t going to surprise most photographers: the lack of control inherent to a phone camera means shots can be hit or miss. The Z 7’s files have far greater latitude for processing, with the iPhone’s shots being relatively “brittle.” Additionally, the Z 7 provides far greater flexibility in choice of shutter speed and control of DoF. 

The Night Mode option isn’t great for subjects with more movement, as you’ll need to hold still for 3 to 10 seconds to give the iPhone enough time to build its image. An additional issue I noticed was a large amount of lens flare, despite the lens being factory clean. Given enough time and scratches, this could become a problem.

These aren’t deal-breakers, however, as the easy portability means the iPhone is a great choice for a pocket camera. The inclusion of a truly wide angle lens leaves most point-and-shoots in the dust, in my opinion.

To answer the title question: yes, the iPhone can match a full frame camera at night, but with some big asterisks on that answer. Handicapping the Z 7 in terms of post-processing is a huge benefit for the heavily processed iPhone shots, while resizing further tilts the field in favor of the iPhone. While I won’t be replacing my camera with my phone anytime soon, I’m very happy to have another tool in my bag, particularly to fill the niche of an unobtrusive, pocketable camera.

More broadly, these results should leave photographers excited about the possibilities of computational photography. While makers have just dipped their toes in the water, with support for sensor shifting and focus stacking, I believe the field has great potential. We’re already seeing new lens designs that make greater use of digital corrections for analog compromises, like distortion and vignetting. The processing power and software know-how that power the iPhone could do amazing things with a higher caliber sensor and lens.

60 Comments

Alex Coleman:

It is a test of what the cameras can do with minimal user-originated processing when shooting casual images in low light. Adding a tripod, post processing, external lights, or anything else changes it into a different test.

I looked at high ISO tests for every camera for a decade before buying a pro body. Now I realize how silly that was considering you need good light to make good photographs. There are exceptions to the rule, but show me a good photo where you turned off all the lights because you heard your camera has low noise at a high ISO.

What iPhone?

Night Mode was introduced on the iPhone 11. The mention of the "truly wide" lens also means it must be an 11 Pro, which has three cameras. (Note: Night Mode does not work on the ultra wide angle lens.)

Alex Coleman:

Good detective work. And yes, unfortunately Night Mode doesn't work on the ultra wide angle, which is disappointing. Guess they have to save something for the 12 Pro...

I love computational photography and as an iPhone user since forever, still prefer the Pixel's slam-it-out Night Sight to Apple's more conservative Night mode. But it sure is terrific to have these capabilities on our phones.

Alex Coleman:

It's really impressive how phones have advanced in the marginal situations. Wide angle lenses, telephoto, low light...

Good test. It's actually pretty common to see smartphones winning this kind of test, generally because the tester doesn't really know how to use a pro camera. And the phone doesn't stand a chance if the photographer's going to shoot some extra shots and employ their own stacking techniques in Photoshop.

I wasn't surprised about white balance. Well, for one, because you're shooting raw, you have no actual white balance SOOC; you set one in post. But also, I was recently reading a bit about Google's Night Sight, and specifically, their color processing. They actually get some pretty awful, noisy, bad color results directly off the sensor. In the process of their Night Sight image fusion algorithm, they have a deep learning model that is used to set the white point and tweak color. I imagine Apple has something very similar. And yeah, it gets fooled in mixed light, and it's responsible for the increasingly common photos that look fine as standalones, but with color and shadow rendering only barely suggested by the original scene.

Francisco B:

Idk about photos, but the low-light performance with video is still pretty bad with the iPhone. I shot some B-roll with the new iPhone for a book opening, and even with a very well-lit interior space, the video footage was very noisy.

As usual, video needs much more consideration when it comes to lighting than photography. If not to lower noise, then to direct the viewer's eye.

Apple did mention at least one video mode that's taking two shots for every frame. But when you're counting on a 9-shot computational composite for your photos in anything beyond bright light (both Night Mode and Deep Fusion use 9-shot stacks), it should be obvious you're not able to do as much for video.

The newer sensor chips, those ridiculous 40, 48, 64, and 108 megapixel Quad Bayer chips, usually have some kind of single-shot HDR mode, where 2 or 4 pixels in the 4-pixel group each get their own exposure. That could certainly be used for video, though I do wonder if four 800 nm pixels at different sensitivities are really better than one 1600 nm pixel.
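
For anyone curious how that split-exposure fusion works in principle, here's a toy Python sketch with made-up values. It isn't any vendor's actual readout or merge logic, just the basic prefer-the-long-exposure-unless-it-clips idea:

```python
# Toy sketch of single-shot HDR from a Quad Bayer-style group: some pixels get
# a short exposure and some a long one, then the two are fused. All numbers
# here are illustrative, not any real sensor's readout values.
import numpy as np

ratio = 4.0                        # assume the long exposure is 4x the short one
long_px = np.array([0.98, 1.00])   # long-exposure pixels (1.0 = clipped)
short_px = np.array([0.26, 0.24])  # short-exposure pixels from the same group

# Use the long exposure where it isn't clipped; otherwise scale up the short one.
fused = np.where(long_px < 0.99, long_px, short_px * ratio)
print(fused)                       # highlight detail recovered from the short exposure
```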

Alex Coleman:

I think the justification for those crazy high-pixel-count sensors is that even if the binning results in slightly lower efficiency in low light, they'll never perform well in that range anyway, while the higher resolution can come into play in good conditions.

Black Z Eddie:

Doesn't the Z 7 have that low-light feature where it combines several images into one JPEG (I shoot Sony, so I know nothing of Nikon)? Since the premise of the test was walk-around minimalism, wouldn't it have been a more apples-to-apples comparison had that feature been used on the Z 7? This way, you'd be comparing in-camera low-light processed image against in-camera low-light processed image.

Alex Coleman:

It does not. You can shoot a focus stack, with the camera shifting focus automatically, but not a noise-reducing stack.

Interesting that this hasn't caught on or been improved in more cameras.

My old Canon 6D had a 4-shot mode that would either add or average the shots, your choice. The adding part is the basic function in Google's Night Sight, and for HDR, they don't shoot brackets at all. Of course, like most of these functions on a camera, you did need a tripod, and the 6D would only produce a JPEG, though maybe some of the more expensive Canons delivered a merged raw.

Alex Coleman:

I think there's a lot to be said about what's missing from the firmware of these cameras. It'd be great to see expanded support for shooting things like this, or bracketed shots that are automatically grouped in LR, for instance.

While shooting a set of images for stacking is easy, for higher resolution cameras, you'd probably have to do the actual composite on your computer anyway, since the processor in an iPhone is miles ahead of the processor in a camera.

These are just static objects. Try to shoot some action in low light! At least it compares the brilliant software in an iPhone with the raw file from the camera. At the least, they should be processed the same, or just show us the unprocessed raw files with EXIF.

Benoit Pigeon:

Bingo. That's why I ask for files but never seem to get them. Yes, phones are getting as good-looking as DSLRs or mirrorless, but only if you remove all the conditions that don't favor phones. It starts by showing images at lower res, which can render a slightly blurred image almost perfectly sharp, in the case of a car for example. I personally like to see the structure of the pixel organization, and right away you can see the difference between the two. And then you have this following video where top-of-the-line phones do really weird things. Check 3:45, I don't think two cameras would have that much variation, and it becomes more evident that there is more work behind the scenes in processing than in capturing the actual images when you stop at 3:58 and 4:14. Both are 12 MP. Even my old 3 MP Canon D30 didn't do that badly. - https://fstoppers.com/gear/apple-iphone-11-pro-max-vs-google-pixel-4-xl-...

Hey, downtown Gilbert! A great place to take photos. :D I just got the new iPhone as well; I'm excited to see what it can really do.

Alex Coleman:

Always good to test out a camera where there's plenty of good restaurants!

Manny Pandya:

I'm a little confused. Did you compare the Z 7 raw files to the iPhone JPEGs? If so, this makes no sense to me; apples and oranges, as they say. I understand you wanted to evaluate the "walking around" capability of the platforms, but why, then, didn't you program the Z 7 to shoot direct to JPEG and evaluate those files against the iPhone's? My understanding: raw files are meant to capture the most possible data the chip can produce from the scene. To accomplish this, they sacrifice what we deem as more pleasing aesthetics to ensure the most data per pixel is captured, with the understanding that you'll "grade" the photos in processing software afterward (same as "log" files in a professional video camera). A JPEG is an already graded raw file; it's just that instead of a human pushing the sliders, an algorithm is doing it for you. So comparing the two is apples and oranges to me (forgive the pun); both fruit (images), but vastly different tastes (aesthetic looks).

There is no doubt that computational photography is the wave of the future. It has already changed the aesthetic expectations and preferences of the public to the point where professionals will be forced to use it to meet those expectations in order to sell images. Therefore, it will eventually leak its way up into pro-body cameras. But there will be a switch, so we can turn it off when we want (on another subject, for another article at another time, I predict the public taste will eventually tire of the hyper-HDR, oversaturated, extreme-contrast look that's in vogue right now and swing back toward a reality-based aesthetic). Those of us who like to print big will certainly enjoy the benefits of having computational photography applied to a file from a full frame or medium format sized sensor.

Now that would be a compelling exercise, and article... find the rogue Apple imaging engineer who's willing to apply Apple's computational photography algorithms to a full frame or medium format camera and evaluate the results.

Alex Coleman:

The raw files were processed with a Lightroom profile matched to the Z 7's Picture Control settings. While that's not pixel-identical to a JPEG from the camera, it'll look the same for these purposes.

In-camera computational photography is going to be quite a ways off for Nikon/Sony/Canon/etc., at least at the level that phone manufacturers are implementing it, both for software and processing-power reasons, which personally is fine by me. As long as the camera supports an easy way to shoot a focus stack, I'm happy to edit it on a computer, where I'll have much more control.

Great test. I've always wondered about this and how iPhones have touched and changed our business.

Alex Coleman:

It's really getting to the place where I could see shooting and using the results for some purposes. Between a wide angle lens and better performance in marginal light, I feel a lot less limited to the old phone paradigm of bright light and a 28mm style view.

You should've also tested the Pixel 4 with Night Sight alongside. The Pixel can save RAW files shot in Night Sight. Without it, this basically sounds like an ad for Apple, who no doubt have made a big jump from their poor showing last year, but remember that Pixels have had this level of low-light photos (including RAW) for about a year or so already.

David Love:

Anything besides Apple or Sony isn't allowed here and will get them banned.

Alex Coleman:

It's not a big conspiracy. I own an iPhone; I own a Z 7. I shot this test with an iPhone and a Z 7. If Google wants to send over a Pixel to test, I'd be just as happy to compare them.