There's a long-held belief that higher-megapixel cameras produce more noise in low-light situations. Here, different models of Fuji, Canon, and Sony cameras are compared side by side to show that high-megapixel cameras do not produce more noise than lower-megapixel models when lighting conditions are poor.
When I began traveling in earnest around the turn of the century, I started collecting digital cameras. Unfortunately, backpacking shenanigans and questionable decision-making along the way meant that I parted with almost all of my first few purchases, sometimes involuntarily. However, I have managed to keep a bunch of Canon cameras, going way back to the Rebel T3i. It goes without saying that as I've increased the megapixel count of my cameras all the way up to the 45 MP Canon EOS R5 mirrorless body I use today, detail and image quality have improved commensurately. But what about noise performance?
That question brings us to this great video by Tony Northrup, in which he addresses what he believes to be a myth: that high-megapixel cameras produce more noise in low-light situations. To test this, Northrup takes two Sony cameras, two Canon cameras, and two Fuji cameras and compares the noise in identical images from each model. For each brand, one camera has a lower-megapixel sensor, around the 20 MP mark, while the other has a higher megapixel count. Northrup says his results prove it is indeed a myth that higher-megapixel cameras produce higher levels of noise in low-light situations. Give the video a look and let me know your thoughts. Do you agree?
It seems that different generations of sensors are being compared.
It's probably fair to say that a ten-year-old sensor design with fewer pixels has as much noise as the latest sensor with more megapixels.
Besides, if this were REALLY true, full-frame zealots would no longer have anything left to shout at Micro Four Thirds fans, would they? :-)
(BTW: despite what guys who draw pretty graphs have to say, I'm finding the new BSI stacked sensor in the Olympus/OMDS OM-1 really does have less noise and more dynamic range than previous Olympus 20 MP cameras.)
He's just restating obvious things in click-bait fashion.
What do you expect from Northrup?
Exactly this!
This weekend I saw a review of the Sigma 60-600 on the Sony A7R V, and it has a lot of noise at ISO 3200:
https://www.dpreview.com/sample-galleries/4086060977/sigma-60-600mm-f4-5...
That's actually pretty clean and normal. The problem is that you're pixel peeping, especially with a 60-megapixel file. When viewed normally on a 27" screen, there's no noise. If you print in a magazine, there will be no noise. If you upload to social media, there will definitely be no noise.
But if you skip the pixel peeping, photos from any APS-C camera will look fine on social media, on screen, and in a magazine.
Oh, no! The horror! Try not to faint, but they also look fine on the side of a bus and on billboards. I know, right? The world has gone crazy. How could we allow this to happen? :D
The ugly, secret truth about megapixels is that people spend a lot more money and carry a lot more weight to have more megapixels, primarily so they can crop most of them away.
Get it right in the viewfinder, and you don't need more than 20 megapixels.
Well, his results are obvious, but based on non-linear data.
The RAW that comes out of the camera is not a classic untreated negative. The comparison he makes is useless in that he is not processing native sensor material. Basically, it is as if he were comparing the cameras' JPEGs. Just like those, the RAWs have gone through the internal processing of the respective camera, and so no one except Canon, Fuji, and Sony knows exactly what was done to the real sensor data at higher ISO values. Just because you don't see the changes in the result doesn't mean that nothing happens there. And that something has to happen there is not a myth, that's just physics - even if we don't see the differences in the final result.
Basically, he mentions this "error" himself at 05:12 when he says that more pixels require more signal amplification. And that means smaller pixels, whose signal must be amplified more, suffer a higher quality loss than larger pixels that need less amplification. All of this may look the same in the end, but you might see it especially in the dark areas, from which you can squeeze out less dynamic range. Unfortunately, he did not make such comparisons with the pictures he took, since none of the sample pictures were taken in "bad" or dark lighting conditions.
In short, if the advertising promise says that the camera is better in low-light situations, then you should also compare low-light images if you want to prove that this is not true. But without pixel peeping, he's right that these advertising arguments carry less weight than they did years ago.
It is insanely easy to try it for yourself; DPReview has a perfect tool for this: https://www.dpreview.com/reviews/image-comparison
You can even download the RAWs and play with them at home in your own style.
I am a physicist, and I can tell you this is not a myth, at least not if you are asking the right question. What is interesting in the end is the so-called signal-to-noise ratio (which is what many people will simply call "noise", because images are normalised to get the right exposure - see below). You have a certain signal, say a sharp edge with a certain difference in brightness, and you have the noise, which also creates a certain random difference in brightness between pixels that would have the same brightness in the absence of noise. If you take the ratio of the two, you get the signal-to-noise ratio. If this ratio is close to 1, i.e. the signal has the same brightness difference as the noise, you will have a hard time seeing the signal (edge) clearly.
Now, if you make the pixel area smaller, the signal gets smaller as well. This is pure physics; there is no way to change it with sensor design or anything else. You will simply get less light into the pixel, because it covers a smaller area. The noise, on the other hand, stays mostly independent of the pixel size, because it is dominated by the read-out electronics. Hence, if you take the ratio of the smaller signal to the same noise, you get a smaller (= worse) signal-to-noise ratio.
Now you need to normalise the image so that the exposure is correct again. Less light per pixel means you need to either get more light onto the sensor (i.e. open the aperture), use a longer exposure time, or increase the amplification of the read-out signal. The ISO setting (which is the amplification of the signal) is defined in such a way that any particular number (e.g. ISO 100) always means the same thing for the photographer (the same image result at the same aperture and exposure time) rather than a particular technical amplification. Hence a camera with smaller pixels must apply more amplification than a camera with bigger pixels at the same ISO setting.
The amplifier does not distinguish between signal and noise, so more amplification means the noise gets amplified too. As a result, you will get more noise from a camera with smaller pixels if everything else is left unchanged. In reality, you will never find two cameras that differ only in the size of their pixels, so this is hard to observe in a real-world experiment.
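To put illustrative numbers on this, here is a minimal sketch of the read-noise-dominated case described above; all values are made up for the example, not measured from any real camera:

```python
# Toy model: signal scales with pixel area, read noise does not.
# All numbers are illustrative, not measurements of any real sensor.

def snr_per_pixel(pixel_area_um2, flux_photons_per_um2, read_noise_e):
    signal_e = flux_photons_per_um2 * pixel_area_um2  # electrons collected
    return signal_e / read_noise_e

FLUX = 5.0        # photons per square micron during the exposure (made up)
READ_NOISE = 3.0  # electrons RMS per pixel, independent of pixel size (made up)

big = snr_per_pixel(16.0, FLUX, READ_NOISE)   # 4 um pitch -> 16 um^2
small = snr_per_pixel(4.0, FLUX, READ_NOISE)  # 2 um pitch -> 4 um^2

print(f"big pixel SNR:   {big:.1f}")    # 26.7
print(f"small pixel SNR: {small:.1f}")  # 6.7, a quarter of the big pixel's

# Raising the ISO does not rescue the ratio: the amplifier multiplies
# signal and read noise by the same gain, so the SNR stays unchanged.
```

In this simplified model, a pixel with a quarter of the area ends up with a quarter of the signal-to-noise ratio.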
There is one scenario, though, where my assumptions are not correct: for long exposures (minutes, as in astrophotography), the noise might no longer be dominated by the read-out electronics but by thermal effects. This type of noise accumulates over time, so longer exposures will show more of it (while the read-out noise stays independent of the exposure time). Since this type of noise acts directly on the sensor pixels themselves, it also scales with the pixel size. Hence smaller pixels will show less of this noise, but also less of the actual signal (so the ratio stays constant). So if you are heavily dominated by this type of noise, smaller pixels will not make it worse, up to the point where the read-out noise becomes dominant again (if the pixels are very small).
So if we are seeing high-pixel-count cameras with similar noise (signal-to-noise ratio) to low-pixel-count cameras, it merely means the low-pixel-count camera could be doing a better job.
Thank you for explaining so well what's actually going on! I think the issue is that people expect noise to change linearly with pixel count, when it doesn't necessarily. But there are so many other aspects of a camera's sensor that affect how well it handles noise; a sensor's age and underlying technology have a much greater overall effect.
If you look at DXOMARK, while it certainly isn't a perfect indicator of a camera's performance, they have graphs showing the exact SNR at each ISO level. From those, it's clear that cameras of the same generation have differing SNR depending on resolution; there's a reason the Sony a7S from 2014 nearly ties the Sony a7 III from 2018. It took four years in that case for sensor technology to improve enough that a 24 MP camera could catch up with a 12 MP camera in noise performance. That's not an assumption; it's a technical measurement comparing signal and noise.
It's not like the Northrups ever really have a good understanding of stuff like this, though. A few years ago they posted a video vehemently arguing that focal length and aperture values should change depending on the intended sensor size, when focal length and aperture are literally physical characteristics of a lens, completely independent of sensor size. In the same video they also argued that ISO should be gotten rid of because not every brand is perfectly identical at the same ISO, despite the fact that it's literally named after the International Organization for Standardization (ISO) as a standard for film sensitivity (meaning irregularities are on the brand and have nothing to do with ISO). I really don't like them, because in my experience they tend to make under-researched videos that present very subjective opinions, or sometimes literal falsehoods, as objective fact.
https://m.dpreview.com/articles/5365920428/the-effect-of-pixel-and-senso...
This article explains why the difference in reality isn't as pronounced as in theory. On modern sensors read noise is low, and shot noise (a.k.a. photon variation), which at a given output size depends on the total light collected rather than on the pixel count, can play the bigger role. Also, even with similar tech, smaller pixels tend to have a bit less raw read-out noise each, compensating for their higher amplification, so the end result is close.
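As a rough sketch of that compensation argument: compare one big pixel against four small pixels covering the same area, averaged together at output. The numbers are made up purely for illustration:

```python
import math

FLUX = 5.0  # photons per square micron (made up)

# One 4 um x 4 um pixel vs. four 2 um x 2 um pixels over the same area,
# with the four small pixels averaged ("binned") at the same output size.
signal_big = FLUX * 16.0          # 80 electrons
signal_binned = 4 * (FLUX * 4.0)  # also 80 electrons: same total light

read_big = 3.0    # electrons RMS for the big pixel (made up)
read_small = 1.5  # per small pixel: the "a bit less readout noise" above

# Independent read noise adds in quadrature when pixels are combined:
read_binned = math.sqrt(4 * read_small**2)  # = 2 * read_small = 3.0

print(signal_big / read_big)        # 26.7
print(signal_binned / read_binned)  # 26.7: the gap has closed
```

If each small pixel's read noise is half the big pixel's, the binned result matches it exactly; anything less and the small pixels fall behind.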
You are right, I neglected shot noise, but it seems modern high-end cameras are already in the regime where it plays the dominating role. It scales roughly with the square root of the number of photons, so 4 times the pixel area gives you half the relative noise (and not a quarter of it). Still, the noise certainly does not stay constant; there is significant improvement with increasing pixel size.
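A quick numerical check of that square-root scaling, with made-up photon counts:

```python
import math

n_small = 1_000      # photons collected by a small pixel (made up)
n_big = 4 * n_small  # a pixel with 4x the area collects 4x the photons

# Shot noise is sqrt(N), so the signal-to-noise ratio is N / sqrt(N) = sqrt(N).
print(math.sqrt(n_small))  # ~31.6
print(math.sqrt(n_big))    # ~63.2: twice the SNR, i.e. half the relative noise
```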
Nice explanation!
One thing you did not explore is that the noise performance of sensors constantly improves as the technology improves.
Part of the noise problem with conventional sensors is that analogue pixel data is multiplexed and sent to a row-common analog-to-digital converter (ADC). This presents thousands of opportunities for noise to enter the row-bus analogue data stream, as well as opportunities for "pixel jitter noise" when individual pixels from different rows arrive at the ADC slightly out of sync.
New "stacked" sensors have an ADC per pixel, on the back of the sensor chip. By putting the ADC both physically and functionally "closer" to the photosite, noise is further reduced, and pixel jitter is eliminated.
In this article, different sensor generations were compared. OF COURSE a ten-year-old sensor is going to have worse per-pixel noise performance than a current-generation sensor!
Yes, exactly. The improvement in modern sensors happens independently of the pixel size. Or to put it differently, part of the improvement is eaten up by shrinking the pixels.
For the most part I agree with his conclusions; however, there are some exceptions. For example, my Fujifilm X-T5 produces far more noise in low light than my X-T3, which produces more noise than my X-T2. Fine details are retained better, but the overall image is noticeably worse before post-processing the RAW files. Likely this has more to do with how the camera processes the RAW data and with the amplification process, but the sensor is a major part of the equation. In typical Tony Northrup fashion, he keeps recycling the obvious in an attempt to get likes, which makes him super cringe-worthy. With every new camera release, he swears that it's better than the others. IMO he's both annoying and untrustworthy. As for his comparisons, wouldn't it make more sense to use the same brand and the same generation of cameras (Sony a6600 vs. a7 III vs. a1 vs. a7R III?) and then compare the RAW files from the different sensor sizes with various megapixel counts?
Who is Tony Northrup, and why would anybody care about a click-bait influencer's opinion on anything?
If you zoom in to 200%, 300%, or 1,600%, of course the lower-MP image is going to look much softer. Tony should know better.
I don't think he busted the myth. If anything, he admits the higher-MP camera has more noise.
https://youtube.com/clip/UgkxW1AHyAVTXfoLpShSfKButOIKglzAbF0Q
Then he has to drastically clean up the noise and increase the sharpening to make up for the noise reduction he applied. smh
These tests were supposed to show that higher MP counts do not produce more noise, not how well you can clean the noise up afterwards. The default settings should have been left alone.
You're right: he didn't bust a myth. But he's a clickbaiter, one of the worst on YouTube, so he used a clickbait title.
To be fair, Tony didn't create the "myth" title.
I have nothing against these guys. Typically, I may defend them, or leave it alone because I have no opinion or I just don't care. But, in this case, his testing and observation methods were ludicrous for the point he was trying to make.
Is his method of testing good? He's testing in (relatively) good light at ISO 25600, 1/800 s, f/4. Would it be better to test in bad light (candlelight) at ISO 6400, 1/125 s, f/2.8? I'm asking because I think the latter would be a more typical situation (think concerts, bars, low-light restaurants, etc.), and because noise is more often a problem in bad light than in good light.
Would the latter test method give a different result?
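For what it's worth, the two recipes do imply different scene brightness levels. A quick sanity check using the standard exposure-value arithmetic (nothing camera-specific, just EV normalised to ISO 100):

```python
import math

def scene_ev100(f_number, shutter_s, iso):
    # Scene brightness as EV at ISO 100; higher means a brighter scene.
    return math.log2(f_number**2 / shutter_s) + math.log2(100 / iso)

good_light = scene_ev100(4.0, 1 / 800, 25600)   # ~5.6 EV
candlelight = scene_ev100(2.8, 1 / 125, 6400)   # ~3.9 EV

print(f"{good_light:.1f} EV vs {candlelight:.1f} EV, "
      f"about {good_light - candlelight:.1f} stops apart")
```

So the candlelight recipe is roughly 1.7 stops darker, which is exactly why I wonder whether it would change the result.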
Since he is comparing completely different sensors (plus read-out electronics), even changing the ISO very slightly might change the result. Most modern cameras have "magic ISO numbers" at which the noise drops again compared to the setting just below. Hence hitting one of these on one model while landing just below such a magic number on the other might give misleading results even when comparing cameras of the same generation. At least this effect won't be present at very high ISO settings, so in this respect he is doing the right thing (although probably without knowing it)...
Take two sensors with the same technology but different megapixel counts: the Canon R6 and R5. Use the DPReview image quality comparison tool, set it to RAW, and pick one of the highest ISOs. Then it's obvious that the R6 has about 1 EV better noise. Tony is a nice guy, but technology is outside his skill portfolio.
"Tony is a nice guy but technology is outside his skill portfolio."
Yes, his continual blatant dismissal of µ4/3rds has shown that. Even when he descends to the point of actually trying one, his confirmation bias kicks in.
It's not so clear. Press the comparison button (match smallest selected image) and the R5 has the same noise as the R6, but the R5 has a lot more sharpness than the R6; look at the cards, for example.