Bit depth is one of those camera specs that confuses a lot of newer and even some experienced photographers. From capture to file format to editing mode, this video gives a quick introduction to what bit depth is.
Bit depth refers to the amount of color information represented in an image. At its most basic, a 1-bit image can only show black and white. Each additional bit doubles the number of tonal values available, so the higher the bit depth, the more colors can be displayed. Currently, most cameras are capable of recording anywhere from 8-bit to 16-bit. So obviously, we want a camera that can shoot at the highest bit depth available, right? Well, it gets a bit more complicated than that.
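The scaling here is just powers of two. As a minimal sketch (plain Python, no camera specifics assumed), the tone and color counts for common per-channel bit depths work out as:

```python
# Tones per channel and total RGB colors for common per-channel bit depths.
# An n-bit channel stores 2**n levels; an RGB image distinguishes (2**n)**3 colors.
for bits in (1, 8, 10, 12, 14, 16):
    tones = 2 ** bits
    colors = tones ** 3
    print(f"{bits:2d}-bit: {tones:>6,} tones/channel, {colors:,} possible RGB colors")
```

An 8-bit channel gives 256 tones (about 16.7 million RGB colors), while a 16-bit channel gives 65,536 tones.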
In this video, Matt Granger asks the question: does bit depth even matter? Granger starts off doing a great job breaking down just what bit depth is, how cameras use this information to display color in images, and how bit depth is displayed in various file formats. He then goes on to point out how most images are displayed on the internet as 8-bit JPEGs, while most monitors are only capable of displaying 8 to 10 bits.
It is in raw file editing that higher bit depths become important. If you were to take a 16-bit image and convert it to 8-bit, the average person typically won't see a difference. Most people have trouble seeing any difference in gradation above 10 bits, so unless you're pixel-peeping, 8-bit gets the job done. However, when you take that converted 16-bit-to-8-bit image and start to bring up the shadows or pull down the highlights, you may start to see noticeable banding in the gradations. That's why, even though you will most likely export your final images as 8-bit JPEGs, you want to do your edits at a higher bit depth whenever possible to give yourself the most leeway in those edits. Of course, how drastic your edits are and how far you try to push them will vary from person to person. Even when editing, a lot of photographers probably wouldn't see much benefit from high-bit-depth editing if those edits are basic adjustments.
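Why the editing order matters can be sketched numerically. The following is a toy example (plain Python, with an artificial dark gradient standing in for real shadow data) comparing a 3-stop shadow push done before versus after the conversion to 8-bit:

```python
# Toy demo: quantize a smooth 16-bit shadow gradient to 8-bit before vs. after
# a naive 3-stop (x8) brightening, then count the distinct output tones.
def push_3_stops(v, maxv):
    return min(v * 8, maxv)

shadow_16 = range(0, 2048)  # dark region of a 16-bit gradient

# Edit in 16-bit, then convert to 8-bit for export:
edit_first = {push_3_stops(v, 65535) * 255 // 65535 for v in shadow_16}

# Convert to 8-bit first, then edit:
convert_first = {push_3_stops(v * 255 // 65535, 255) for v in shadow_16}

print(len(edit_first), "distinct tones when editing in 16-bit")    # 64
print(len(convert_first), "distinct tones when editing in 8-bit")  # 8
```

The 8-bit-first path collapses the same shadow region into a handful of widely spaced levels, which is exactly what shows up as banding in smooth gradients.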
So, what does your workflow look like, and how does bit depth affect your images?
>He then goes on to point out how most images are displayed on the internet as 8-bit JPEGs, while most monitors are only capable of displaying 8 to 10 bits.
Since JPEGs and monitors use gamma curves, but sensors don't, those numbers aren't measuring the same thing and shouldn't be compared directly. It takes a 12-bit sensor to capture the dynamic range of an 8-bit JPEG using sRGB. That is why sensors are rarely less than 12 bits.
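That claim can be sanity-checked in a few lines (plain Python using the standard sRGB transfer function; a rough sketch, not a model of any particular sensor):

```python
import math

def srgb_to_linear(s):
    """Standard sRGB electro-optical transfer function, s in [0, 1]."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# The smallest nonzero 8-bit sRGB code decodes to a tiny linear value...
darkest = srgb_to_linear(1 / 255)
# ...so a linear (sensor-style) encoding needs this many levels to resolve it:
levels_needed = 1 / darkest
bits_needed = math.ceil(math.log2(levels_needed))
print(f"sRGB code 1 = {darkest:.6f} linear -> needs ~{bits_needed} linear bits")  # ~12
```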
Can you elaborate on this? In my understanding of all of this, dynamic range has nothing to do with how many bits are being used, since dynamic range is just the ratio between the darkest and lightest possible capture. Theoretically, a 1-bit image is capable of just as much dynamic range as a 16-bit image; there just wouldn't be anything between the white and the black. The bit depth is just an expression of how many values are being used between those brightest and darkest points. The gamma curve is just how those bits are allocated across the spectrum.
Is this a capture vs display confusion on my part?
I agree with Michael G. Rather than try to elaborate on it myself, I suggest you read the "Explanation" section of the Wikipedia article on Gamma Correction:
Dynamic range is loosely the ratio between the brightest pixel value and the lowest *nonzero* pixel value. If you try to define it as you have, i.e. the ratio between the highest value (e.g. 255) and the lowest value (0), then of course they're all the same (i.e. infinity) because you're always dividing by 0!
A gamma-compressed signal like a jpeg pixel has a larger dynamic range than a linear signal of the same bit depth because in the gamma-compressed signal the brightness difference between 254 and 255 is much larger than the brightness difference between 0 and 1.
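As a minimal numeric sketch of this point (plain Python, using a simple gamma-2.2 curve rather than the exact sRGB piecewise function), defining dynamic range as brightest over smallest nonzero linear value:

```python
import math

max_code, min_code = 255, 1  # 8-bit signal

# Linear encoding: code values are proportional to light.
linear_dr = max_code / min_code

# Gamma-2.2 encoding: decode the smallest nonzero code back to linear light.
gamma_dr = 1.0 / ((min_code / max_code) ** 2.2)

print(f"linear 8-bit:    {math.log2(linear_dr):.1f} stops")  # ~8 stops
print(f"gamma-2.2 8-bit: {math.log2(gamma_dr):.1f} stops")   # ~17.6 stops
```

Same bit depth, very different dynamic range, purely because of how the codes are spaced.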
I think we are talking past each other a bit here. And I do think it has to do with capture vs display.
Most modern professional photography sensors are capable of 13-15 stops of dynamic range (and are thus however many bits are required for that) and function in a linear fashion, with each successive stop requiring twice as much light as the last. This component is somewhat irrelevant to the discussion at hand, as what we are talking about is the analog-to-digital conversion of that sensor signal into an image, or a RAW data file.
The bit depth of this conversion is what's being discussed in the video. This conversion is almost always going to involve some kind of logarithmic interpretation, because even 14-bit RAW doesn't contain enough code values for linear encoding of 12 stops of dynamic range. On the video side of things there are only two recording formats that are truly linear, RED RAW and Sony RAW/X-OCN, which are 16-bit. Sony tried a 12-bit "linear" RAW which was underwhelming and required some pretty significant reallocation of bits to achieve.
Now, once this conversion is done, the dynamic range of the image is set; the bits just determine how many discrete tones there are between black (0) and white (255 for 8-bit, 1023 for 10-bit, etc.). Black and white are going to have the same luminance value regardless of the encoding depth. Technically, black (0) and white (255) would have infinite dynamic range, so we really talk about the black and white clipping points. So 8-bit encoding has 256 tones from black to white while 10-bit has 1,024, even though the white and black luminance values are the same. The more bits, the more discrete steps there are between any two luminance values, but there are always steps; it's just whether or not we are capable of perceiving them.
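The "same endpoints, more steps" idea can be sketched by counting the code values that fall inside a fixed luminance span (here 25% to 75% of full scale; plain Python, hypothetical numbers):

```python
# Count the discrete codes between the same two relative luminance values
# (25% and 75% of full scale) at different bit depths.
for bits in (8, 10, 12):
    full_scale = 2 ** bits - 1
    lo, hi = round(0.25 * full_scale), round(0.75 * full_scale)
    steps = hi - lo + 1
    print(f"{bits:2d}-bit: codes {lo}..{hi} -> {steps} steps across the same span")
```

Each extra 2 bits quadruples the number of steps between the same two luminance endpoints.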
We haven't even gotten into the display side yet, whew. But it's basically the reverse of the capture. The display takes those bits of data and creates an illuminated image, and is subject to all of the same artifacting problems. The question at hand is where the limits are being imposed or exceeded. On a 10-bit display with an 8-bit image, the image itself is going to be the limiting factor on how many tone values show up, but with a 12-bit image file, it's the display that is the limiting factor. Further, if the display is standard dynamic range (7-ish stops) and the file it's displaying has 10 or 12 stops of luminance, the display is going to have to interpret the information and squish it into the stops available to it, or it's just going to look like garbage, either flat or highly clipped. On a high dynamic range display this is much less of an issue. This is quickly demonstrable by sending a log image to both an SDR and an HDR monitor. On the SDR monitor it will look flat and desaturated, but it will look fine on the HDR monitor.
I could still be entirely wrong about all of this, but this is my best understanding. Here are some of the sources I checked with when writing this up.
One gradient and 8-bit goes out the window...
For retouching work I always work in 16-bit PSD/PSB; this very much avoids artefacts and banding in the edit. I work with medium format files that have an insane amount of information, which makes them a dream to retouch. File sizes do become big: IQ4 150 files start at ~900 MB.
The author needs to consider that yesterday's JPEG is not the same as today's JPEG.
Also address HEIF formats.
RAW storage on my 1TB hard drive is a big problem, and storage should be addressed.
This was very informative
Yes, I never knew that Nikons have a Yellow channel!
Hi! Can I print my artwork in 14 or 16 bits? Is there a printer that can do this?