A few days ago, camera industry guru Tony Northrup published a video arguing that in the age of digital photography, ISO is effectively meaningless and that it’s no different from dragging the exposure slider in Lightroom. Photographer Dave McKeegan has offered a response and argues that Northrup’s logic is completely wrong.
Like many others responding to Northrup’s video, McKeegan’s point hinges on the fact that the camera, in processing the signal from the sensor, is multiplying that data before it is converted from analog data into digital data. This is what Fstoppers' own Lee Morris was suggesting might be happening when performing his own tests last week, albeit without knowing the science behind it.
In effect, the exposure slider in Lightroom is dealing with completely different information than a camera’s ISO setting, thus producing a different outcome. As one of the comments on Northrup’s original video observes, sensor signal is to raw as raw is to JPEG. In short, underexposing and relying on editing software to fix it later is not a recommended way of exposing your digital images; adjusting your camera’s ISO setting is the better option.
If you’re interested in the technological aspects, be sure to watch all of McKeegan’s video. Beyond that, if you’re still keen to know more, you might want to deep dive into the comments on both videos. You will almost certainly want to check out the comments made in response to the Fstoppers article, paying particular attention to informed contributions from community members Gary Gray and Paul Gosselin.
Whatever the outcome of this discussion, it’s useful to have an awareness of how cameras and editing software deal with information differently, as well as an insight into how ISO functions as an industry standard, albeit one with various hangovers from the film era.
Apologies for the long post, I just could not find a better way to illustrate the point. (And, whether it is necessary to say or not, I have an engineering background in electronics and computer science.)
I have a deep respect for Tony Northrup and Dave McKeegan, but neither mentioned that the analog signal has many more levels of charge than the digital file can record, because A/D (analog-to-digital) converters have a limited number of bits: up to, I think, 14 for the best cameras at the moment, which can record 16,384 different levels of light, from 0 = black to 16,383 = white (assuming a black-and-white-only sensor, but the same goes for each colour of an RGB sensor). The sensor may be able to generate literally millions of different levels of charge per pixel, but these analog levels of charge are binned into groups by the A/D converter, both to prevent the digital images from becoming too big and because high-speed A/D converters become much more difficult to construct the more bits they have.
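To put numbers on that, here is a trivial sketch (function name is my own, just for illustration) of how many distinct output codes an N-bit A/D converter can produce:

```python
def adc_levels(bits: int) -> int:
    """Return how many distinct output codes an A/D converter with `bits` bits has."""
    return 2 ** bits

print(adc_levels(14))  # 16384 levels, coded 0 (black) to 16383 (white)
print(adc_levels(2))   # 4 levels, as in the toy example below
```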
To illustrate the difference between analog boosting of ISO and digital boosting, think of a hypothetical, non-ISO-invariant camera that has a 2-bit A/D converter, which means a digital image could show four different levels of lightness per pixel: 0 = black, 1 = dark grey, 2 = light grey, and 3 = white. The camera's sensor has four pixels that can record 16 levels of charge each, where 0 = black, 15 = white, and the other levels are different shades of grey. In the conversion, analog levels 0, 1, 2, 3 become 0 = black in the resulting digital file; analog levels 4, 5, 6, 7 become 1 = dark grey in digital; analog levels 8, 9, 10, 11 become 2 = light grey digitally; and analog levels 12, 13, 14, 15 become 3 = white digitally.
This camera records an underexposed image where the analog charge levels are
Pixel 1: 2
Pixel 2: 5
Pixel 3: 6
Pixel 4: 7
When converted into a digital raw image, these pixels get the following values:
Pixel 1: 0
Pixel 2: 1
Pixel 3: 1
Pixel 4: 1
Trying to get a better result, we could do as Tony Northrup suggests and brighten the digital raw image, in this case by multiplying by two (one stop), resulting in the following digital values:
Pixel 1: 0
Pixel 2: 2
Pixel 3: 2
Pixel 4: 2
The resulting digital image, just as the underexposed digital image, does not show much detail, as the differences between the analog values of pixels 2, 3, and 4 are lost.
Boosting the analog signal by a factor of two (one stop) instead would result in the following analog values:
Pixel 1: 4
Pixel 2: 10
Pixel 3: 12
Pixel 4: 14
Converted to digital:
Pixel 1: 1
Pixel 2: 2
Pixel 3: 3
Pixel 4: 3
Three different values in the output of the analog-boosted image means it has more detail than the digitally boosted image, which has only two different values. The difference between the two would clearly be much more obvious if we had more pixels and more values to choose from, but I think I may have used enough space here as it is.
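The whole toy example above can be replayed in a few lines of code. This is a sketch of that hypothetical camera only (the names are mine, not from any real camera pipeline): a 16-level analog sensor feeding a 2-bit A/D converter, comparing gain applied before quantization (analog) with gain applied after (digital):

```python
def quantize_2bit(analog_level: int) -> int:
    """Bin a 0..15 analog charge level into a 0..3 digital value."""
    return min(analog_level, 15) // 4

analog = [2, 5, 6, 7]  # the underexposed analog charge levels of pixels 1-4

# Digital boost: quantize first, then multiply by two (one stop), clipping at 3.
digital_boost = [min(quantize_2bit(a) * 2, 3) for a in analog]

# Analog boost: double the charge first, clip at 15, then quantize.
analog_boost = [quantize_2bit(min(a * 2, 15)) for a in analog]

print(digital_boost)  # [0, 2, 2, 2] -> only two distinct values survive
print(analog_boost)   # [1, 2, 3, 3] -> three distinct values, more detail
```

The digital boost cannot recover the differences between pixels 2, 3, and 4 because the quantizer has already collapsed them into the same value.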
Thank you fellow engineer. This comment should be the whole article.
Maybe a few classes of Digital Systems would help a lot of photographers not to swallow every "truth" they see on the web.
Damn.........now I got a headache................(teasing)
In most cameras, the quantization error is below the noise floor and is not the dominant issue.
See https://photographylife.com/iso-invariance-explained for a good discussion. In short, there is a combination of 1/ actual ISO invariance, because some cameras achieve high ISO digitally rather than with analog gain, and 2/ the backend noise (noise that occurs after analog amplification) being very low on modern cameras, so there isn't much for high-ISO analog gain to save compared with a digital push.
After reading the articles, watching the videos, and reading the comments, my conclusion is that I'll continue to shoot the same way I did before this all came to light. :-)
I never heard of Tony until he got popular spouting BS. Now, I just skip on past. At least he is more interesting than Ken Rockwell.
The sensor does not change sensitivity. If you adjust the ISO from 100 to 400, this will not make the sensor four times more sensitive to light. The magic happens when the information from the sensor is processed in the camera into raw or JPG files. This is also why different camera brands with the same sensor will not look the same: different processing of data from the sensor.
This is also why Photoshop, Lightroom, etc. cannot give the same result as in-camera processing. These programs only get the raw files to work with, and that data has already been processed in the camera. But if these programs could get data directly from the sensor, they could in theory produce the same quality.
So in theory you could take all pictures at ISO 100 and adjust the ISO later on the computer. This would only be possible if you could get real "raw" data from the sensor, and not the processed raw file like today.
To understand optics and digital sensors, you need to do the math. And the math is not trivial. And by the way, mirrorless cameras are terrible.
I like this explanation best: https://youtu.be/2sshGdMgJxQ?t=1195
Garden test where I have apparently skipped ISO 200 :) with a ten-year-old full-frame camera; in every case, the results do not look the same at all... mainly the right curtain area is horrible when shot at lower ISOs and pulled up afterwards.
RAWs processed in Lightroom without a touch, composed and exported in PSP, JPG 85%.
Maybe the solution is: have the sensor and camera manufacturers list the signal-to-noise ratio of their sensors in the camera specifications. This would give a better description of the quality of the sensor. If the SNR of sensor "a" is exceptionally low, then ISO 100 plus post-processing amplification might be equal to sensor "b" with a higher SNR using a higher ISO.
Wow, this ISO subject was taken to a level I never expected. Lots of arguments and angry people... over ISO. It's a nice talking point, but I don't understand the emotional responses. At the end of the day, learn to shoot as close to a correct exposure as you can with whatever gear you have. Some cameras look a little different at a given ISO than another. Learn to go with it.