How far should you sit from your screen? How large can you print your photos? Why are stacked sensors better? There is much more to those pixels than you might think.
Our eyes’ retinas comprise millions of photoreceptor cells, individual light-detecting points called rods and cones. Each eye has roughly 126 million of them: about 120 million rods, which detect a grayscale image, and 6 million cones, which pick up the color. The cones stop working in low light, so you can’t see that roses are red and violets blue at night. There is also a third type of cell, the photosensitive ganglion cell, which is involved not in forming the image but in adjusting your iris and circadian rhythm. These cells parallel the light meter that adjusts the exposure in your camera.
That’s a lot of photoreceptor cells in your eye compared to the equivalent photoreceptors on your camera’s sensor. However, that high resolution is mainly concentrated in a small area in the center of your retina, the fovea, and beyond that, the resolution is not so great in peripheral vision.
You can test this with your eyes. Gradually move closer to your monitor or TV. At some point, you will see the individual pixels that make up the picture. However, you can only see those directly in front of your eyes.
The distance at which you can see the pixels varies depending on whether you have an HD 1080p or a 4K screen. Consequently, the ideal viewing distance depends on the monitor you are using. Sit too far away and you can’t resolve all the detail in the image; sit too close and you will see the pixels.
For a 1080p HD screen, the viewing distance should be about three times the screen’s height.
I’m typing this using a 24” HD monitor, so the screen’s height is about 11.8”. Therefore, I ideally need to sit approximately 35.4” from the screen. For a 4K monitor, I should be 1.5 times the screen’s height, 17.6”, away from the screen.
For an 8K monitor, we need to sit closer still to resolve all the details. If my screen were the same size as I have now, I would need to be only 9” from the screen to resolve all the detail. However, I would not be able to see the entire screen at that distance. Consequently, that resolution would be lost on me. Before you rush out to buy the latest 8K TV or monitor, you might want to consider how far from it your chair is and, therefore, how big the screen should be. Otherwise, you won’t get the full benefits of that resolution.
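These rule-of-thumb distances scale simply: the ideal distance halves each time the vertical resolution doubles. A minimal Python sketch, using the multipliers from the examples above (they are approximations, not an optical standard, and `ideal_distance` is a made-up helper name):

```python
import math

# Multipliers from the examples above: about 3x the screen height for
# 1080p, 1.5x for 4K (2160p), and 0.75x for 8K (4320p) -- the ideal
# distance halves each time the vertical resolution doubles.
def ideal_distance(diagonal_in, vertical_px, aspect=(16, 9)):
    """Approximate ideal viewing distance in inches for a 16:9 screen."""
    w, h = aspect
    height_in = diagonal_in * h / math.hypot(w, h)  # screen height from diagonal
    return height_in * 3 * (1080 / vertical_px)

print(round(ideal_distance(24, 1080), 1))  # 24" HD monitor -> 35.3"
print(round(ideal_distance(24, 2160), 1))  # 24" 4K monitor -> 17.6"
print(round(ideal_distance(24, 4320), 1))  # 24" 8K monitor -> 8.8"
```

The small differences from the figures quoted above come only from rounding the screen height.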
Those measurements are approximations to illustrate a point. My screens are wall-mounted on extending brackets, and I move my office chair around. Consequently, I am never exactly 35.4” from the screen. Furthermore, this also assumes we have perfect eyesight. As we get older, most of us suffer some degradation of vision, not just of resolution, but in dynamic range too.
I typically use 300 dpi, or dots per inch, for printing. That means a 1” x 1” square would have 300 x 300 = 90,000 dots, far more than your eyes can perceive. Accordingly, the image looks sharp. If we reduced that to 85 dots per inch, you would see those dots; the image would look pixelated. If you are old enough to remember the newspapers and comics where the pictures consisted of tiny dots, that was roughly the resolution most offset presses used. Yet, like your computer monitor and TV, those images were meant to be viewed from a reading distance, so the pictures appeared well defined.
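The arithmetic is simple but worth seeing: dot count grows with the square of the resolution, so dropping from 300 to 85 dpi cuts the detail by far more than the ratio of the two numbers suggests. A tiny sketch (`total_dots` is a made-up helper):

```python
# Dot count scales with the square of the print resolution.
def total_dots(width_in, height_in, dpi):
    """Number of printed dots in a width x height area at a given dpi."""
    return (width_in * dpi) * (height_in * dpi)

print(total_dots(1, 1, 300))  # 90000 dots per square inch at 300 dpi
print(total_dots(1, 1, 85))   # 7225 dots at a newspaper-like 85 dpi
```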
If you scanned that newspaper picture and then printed it at a bigger size, those dots would appear bigger and further apart, so you would need to stand further back to distinguish the details. The same happens with a low-resolution photograph. If you try to enlarge it too far, the image becomes pixelated and appears soft. Take a few paces backwards and the image shrinks in your field of view; it seems sharp once again. This is worth knowing: if you have a blurry photo that you want to share, it will appear sharper if you reduce it in size.
The printers of billboards know this. That is how they produced enormous prints of images from cameras with far lower resolutions than are available today. People driving past them would not be getting that close and, consequently, could not see the pixels.
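One common rule of thumb (an assumption on my part, not a figure from the print trade) is that 20/20 vision resolves detail of about one arcminute. A dot whose pitch is 1/dpi inches disappears beyond the distance where it subtends that angle, which puts numbers on why billboards get away with very low resolutions:

```python
import math

# Rule of thumb: 20/20 vision resolves roughly one arcminute of angle.
# A dot of pitch 1/dpi inches merges into its neighbours beyond the
# distance where it subtends that angle.
ARCMINUTE = math.radians(1 / 60)

def invisible_beyond(dpi):
    """Distance (inches) past which individual dots at `dpi` blend together."""
    return (1 / dpi) / math.tan(ARCMINUTE)

print(round(invisible_beyond(300), 1))  # 11.5" -- close reading distance
print(round(invisible_beyond(85), 1))   # 40.4" -- comfortably past arm's length
print(round(invisible_beyond(10), 1))   # 343.8" (~29 ft) -- billboard territory
```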
So, how many pixels do we need to print an image to hang on our wall?
According to an old chart on the B&H website, a 10-megapixel camera can produce a 20” x 30” print. However, the Whitewall blog says that from 10 MP upwards, they can print to their maximum size of 106” x 71” (270 x 180 cm). That makes a mockery of the whole race for ever more pixels. Many of us would be better served by lower-resolution cameras with a lower pixel density. Each photodiode, or light receptor, on the sensor would then be larger. Thus, it could gather more photons, so the signal-to-noise ratio and the dynamic range would be greater.
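Working backwards from the pixel count shows how lenient those charts are. A sketch in Python, assuming a 3:2 sensor (`max_print_size` is a made-up helper, and the 130 dpi figure is simply what a 20” x 30” print from 10 MP works out to):

```python
import math

def max_print_size(megapixels, dpi, aspect=(3, 2)):
    """Largest print (long edge, short edge, in inches) a sensor of the
    given megapixel count supports at the chosen output resolution."""
    w, h = aspect
    short_px = math.sqrt(megapixels * 1_000_000 * h / w)  # short-edge pixels
    long_px = short_px * w / h
    return long_px / dpi, short_px / dpi

# A 10 MP, 3:2 sensor is roughly 3873 x 2582 pixels.
print(max_print_size(10, 300))  # ~12.9" x 8.6" at a strict 300 dpi
print(max_print_size(10, 130))  # ~29.8" x 19.9" -- about the B&H 20" x 30"
```

In other words, the chart assumes viewers stand far enough back that around 130 dpi still looks sharp, exactly the viewing-distance effect described above.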
The new stacked sensors, such as those found in the Sony Alpha 1, the Nikon Z 9, the Canon R3, and the OM System OM-1, are far more efficient. Put very simply, on traditional sensors, the millions of photodiodes that collect the light sit alongside their associated transistors, which process the resulting electrical signal. On a stacked sensor, the transistors sit below the photodiodes, so each photodiode can use that space and be much larger.
This means the stacked sensor is more like the retina in your eye, where the bipolar cells and the ganglion cells, which act like the transistors, sit behind the rods and cones.
This new technology also allows much faster shooting. The Z 9 and the Alpha 1 can achieve 20 uncompressed raw frames per second (fps), the R3 achieves 30 raw fps, while the OM-1 can shoot up to a blistering 120 fps of uncompressed raw files, a benefit of its smaller sensor.
Going back to the light receptors in your eye, the color-detecting cones are concentrated on the fovea. The rods work better in low light and are concentrated more on the periphery. Therefore, you can see things out of the corner of your eye at night that you cannot see when you look directly at them.
There are three different types of color-detecting cones. L-cones detect long-wavelength red light, M-cones detect medium-wavelength green light, and S-cones are sensitive to short-wavelength blue light. Overall, our vision is most sensitive to the green part of the spectrum.
That bias towards green is duplicated on the sensor in your camera, where the color filters are arranged as two parts green to one part red and one part blue.
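This two-green arrangement is the familiar Bayer pattern: a 2x2 tile of filters repeated across the whole sensor. A minimal sketch of how it tiles (`bayer_mosaic` is a made-up helper for illustration):

```python
# The Bayer color filter array repeats a 2x2 tile with two green, one
# red, and one blue filter -- mirroring the eye's bias towards green.
BAYER_TILE = [["G", "R"],
              ["B", "G"]]

def bayer_mosaic(rows, cols):
    """Tile the 2x2 Bayer pattern across a rows x cols grid of photodiodes."""
    return [[BAYER_TILE[r % 2][c % 2] for c in range(cols)]
            for r in range(rows)]

for row in bayer_mosaic(4, 8):
    print(" ".join(row))
# Half of all positions are green; a quarter each are red and blue.
```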
Each photodiode sits beneath a color filter that passes one range of wavelengths and reflects or absorbs the others. Because the more numerous green filters reflect red light, the bare sensor appears to have a reddish hue.
I hope you found that interesting. Understanding a bit about how those microscopic dots work can make a big difference to how we work with our photos. Perhaps you have some helpful information relating to resolution, sharing images, and printing that you can share with me. Please do so in the comments below.