Resolution, bit depth, compression, bit rate. These are just a few of the countless parameters our cameras and files have. Let's talk about bit depth here. There's a lot of good talk about 10-bit and a lot of bad talk about 8-bit. The computer can tell the difference, but can you?
What Is Bit Depth?
Bit depth determines the number of colors that can be stored for an image, whether it's a still picture or a frame of video footage. Each image is composed of the basic red, green, and blue channels. Each channel can display a variety of shades of the appropriate color, and the number of shades determines the bit depth of the image. A 1-bit image means there are only two shades per color channel. For a 3-bit image there are two to the power of three shades, or a total of eight shades per channel. An 8-bit image means there are two to the power of eight shades for red, green, and blue. This is 256 different values per channel. When combining those channels we can have 256 x 256 x 256 different color combinations, or roughly 16.7 million. A 10-bit image can display 1,024 shades of color per channel, or over a billion color combinations.
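The arithmetic above is easy to verify yourself. Here is a quick sketch of the formulas in Python (just the math from the paragraph, not any camera or imaging API):

```python
# Shades per channel for a given bit depth: 2 ** bits
for bits in (1, 3, 8, 10):
    shades = 2 ** bits
    # Total RGB combinations: one value per channel, so shades ** 3
    combos = shades ** 3
    print(f"{bits}-bit: {shades} shades/channel, {combos:,} combinations")
```

For 8 bits this prints 256 shades and 16,777,216 combinations (the "roughly 16.7 million"); for 10 bits, 1,024 shades and 1,073,741,824 combinations (the "over a billion").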
Don't get confused by 24-bit color. A color is represented by the three basic channels (excluding the alpha channel, since we're talking about color, not transparency). The color bit depth is the sum of the bit depths of each channel, so 24-bit color means each color channel holds 8 bits of information.
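One way to see why 24-bit color is just three 8-bit channels is to pack the channels into a single integer, the common 0xRRGGBB convention used in web colors (a sketch; `pack_rgb` is a hypothetical helper, not a library function):

```python
def pack_rgb(r, g, b):
    # Each channel occupies its own 8 bits, so the packed value needs 24 bits total
    return (r << 16) | (g << 8) | b

white = pack_rgb(255, 255, 255)
print(hex(white))           # 0xffffff -- the largest 24-bit value
print(white == 2 ** 24 - 1) # True: 16,777,215 is the top of the 16.7-million range
```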
What's the Bit Depth of Most Media Devices?
The majority of displays on the market show images at 8-bit depth, whether they are desktop monitors, laptop screens, mobile device screens, or media projectors. There are 10-bit monitors too, but not many of us have them. If you are curious: the human eye can distinguish about 10 million colors.
What's the Point of Using 10-Bit Images?
As we've seen, neither our eyes nor most of our displays can show us the glory of 10-bit images. So what's the point of having so much data we can't see? For display, there's no use at all: even if a device can interpret that vast amount of data, our eyes won't tell the difference. The advantage comes when processing that data. If you have an 8-bit image and you stretch its saturation or contrast, the software may not have enough data to fill the expanded range and will "tear" parts of the histogram, leaving blank bars of missing data. With denser data to work with, expanding the range doesn't cause such gaps.
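You can simulate that histogram "tearing" in a few lines. The sketch below takes a low-contrast 8-bit gradient and stretches it to the full range; because only 51 input levels exist, only 51 of the 256 output levels can ever be used, and the rest of the histogram is empty:

```python
# A low-contrast 8-bit gradient: only values 100..150 are present
original = list(range(100, 151))

lo, hi = min(original), max(original)
# Stretch to the full 0..255 range, rounding back to 8-bit integers
stretched = [round((v - lo) * 255 / (hi - lo)) for v in original]

# Only 51 of 256 possible levels are used, so the histogram has
# gaps ("torn" bins) -- the cause of visible banding in gradients
used = set(stretched)
print(len(used), "of 256 levels used")  # 51 of 256
print(sorted(used)[:5])                 # [0, 5, 10, 15, 20] -- jumps of ~5 between levels
```

A 10-bit source covering the same tonal band would start with about four times as many input levels, so the stretched output lands on far more of the available values.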
The result is the so-called "banding" (in the figure: the original gradient on the right, the "stretched" color spectrum on the left).
This is why it's so important to be precise when shooting 8-bit images (like JPEG) or 8-bit video (like most DSLRs record). Precise exposure usually won't call for heavy post-processing, and in the end you will have a quality result. When heavy processing is required, that is where 10 or more bits per channel show their advantage: when stretching pixel values, the software has plenty of data to work with and produces a smoother, higher-quality result.
Working with 8-bit still images or 8-bit video footage is not bad unless you plan to make heavy color or contrast changes. Shooting precisely always pays off, but there are times when you might need higher bit depth (or "deeper") files.
Raw still images are 12-, 14-, or 16-bit files. Now you know why you can change the white balance or adjust saturation, vibrance, and contrast with far less quality degradation than when applying the same changes to an 8-bit JPEG. The same goes for video: most DSLR video is 8 bits per channel, so you have to get the picture as close to final as possible in-camera, otherwise post-processing may lower the quality of your final product.
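The raw-file headroom can be shown with the same quantization math as before. This sketch (my own illustration, with an assumed narrow tonal band of 40%-60% of full scale) counts how many of a display's 256 levels survive an identical contrast stretch when the source was captured at 8 bits versus 14 bits:

```python
def display_levels(source_bits, lo_frac=0.4, hi_frac=0.6):
    # A narrow tonal band captured at source_bits, stretched to full range,
    # then quantized down to the 256 levels a typical 8-bit display shows
    full = 2 ** source_bits
    lo, hi = int(full * lo_frac), int(full * hi_frac)
    stretched = ((v - lo) / (hi - lo) for v in range(lo, hi + 1))
    return len({round(x * 255) for x in stretched})

print(display_levels(8))   # 52 of 256 display levels -> visible banding
print(display_levels(14))  # 256 of 256 -> smooth gradient
```

The 14-bit source fills every output level because it had dense data to begin with, which is exactly the advantage raw files give you in post-processing.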
For more great tech related tips, go to ThioJoeTech's YouTube channel.