There is a lot of conflicting information about how humans perceive the world. Some people say that the human eye cannot perceive more than 60 frames per second, while the abundance of high refresh rate monitors and phones seems to contradict that. So, just how high is the human eye's resolution, and how fast is it?
This video comes to us from Corridor Crew, a channel known mostly for its "VFX Artists React" series. In it, they break down the science and technology that differentiate the human eye from a digital sensor, specifically the Red Gemini 5K, the most capable camera they had on hand.
While the tests they do in the video are not entirely scientifically sound, relying a bit too heavily on the subjective impressions of a small sample, they do bring up some of the difficulties in translating something from a digital medium like a camera to a biological one like our eyes. There is also the fact that our eyes have higher "resolution" in the middle of our field of view and lower "resolution" the farther you get from the center. If our eyes had the same resolution across our entire field of view, they would be 576 megapixels, according to Corridor's calculations.
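That 576-megapixel figure isn't pulled from thin air: the commonly cited version of this estimate takes the eye's peak acuity of roughly 0.3 arcminutes and pretends it holds uniformly across a 120° by 120° field of view. Corridor doesn't show every step, so treat the exact inputs below as assumptions; a minimal sketch of the arithmetic:

```python
# Back-of-the-envelope "eye megapixels" estimate. The inputs are the
# commonly assumed numbers, not necessarily Corridor's exact method.
ACUITY_ARCMIN = 0.3   # assumed finest resolvable detail, in arcminutes
FOV_DEGREES = 120     # assumed field of view per axis, in degrees

# Convert the field of view to arcminutes, then divide by the acuity
# to get the number of resolvable "pixels" along one axis.
pixels_per_axis = FOV_DEGREES * 60 / ACUITY_ARCMIN
total_pixels = pixels_per_axis ** 2

print(f"{pixels_per_axis:.0f} px per axis -> {total_pixels / 1e6:.0f} MP")
# Output: 24000 px per axis -> 576 MP
```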
What did you think of the video? Was it helpful at all in understanding the difference between our eyes and a camera? What did you think of their testing methods?
Fun!
Two things I read about sometime back in the 2000s: first, as light levels drop, so does your frame rate. The eye allows more time for light to accumulate, and the frame rate can drop as low as 4 fps, which is why it's difficult to track anything moving in the dark. Second, you are only seeing in black and white, because only the rods have the sensitivity needed for low light.
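In camera terms, that is just a longer shutter speed: if the visual system integrates light over a longer window in the dark, the effective frame rate is the reciprocal of that window. A tiny sketch (the bright-light figure is an assumption for illustration; the 4 fps figure comes from the comment above):

```python
def effective_fps(integration_seconds: float) -> float:
    """Frame rate implied by a given light-integration window."""
    return 1.0 / integration_seconds

print(f"{effective_fps(1 / 60):.0f} fps")  # bright light, ~17 ms window (assumed)
print(f"{effective_fps(0.25):.0f} fps")    # dark, ~250 ms window -> 4 fps
```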
Another interesting point made was that the human eye can only see between 5-7 million colors. What about all those supposed other colors? They are computational values, used for things like antialiasing in image files. The colors don't really exist except for computational purposes.
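For context on where those "extra" colors come from: a standard 24-bit RGB file encodes 2^24, roughly 16.7 million, values, a few times more than the 5-7 million distinguishable colors cited above. A quick sketch of the ratio:

```python
# Encoded colors in a 24-bit RGB file vs. the ~5-7 million colors
# humans can reportedly distinguish (figure from the comment above).
BITS_PER_CHANNEL = 8
encodable = (2 ** BITS_PER_CHANNEL) ** 3  # 16,777,216 combinations

for distinguishable in (5_000_000, 7_000_000):
    print(f"{encodable / distinguishable:.1f} encoded values per "
          f"distinguishable color at {distinguishable / 1e6:.0f}M")
# ~3.4x at 5M, ~2.4x at 7M
```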
The comparison of the eye to the camera has been used at least since I began to photograph the eye in 1979, and it is amazing. That our body's purpose is to move the sensors we use to perceive life through time and space, to be interpreted by the brain, is mind-boggling.
I think this oldie is a good addition to this article.
https://fstoppers.com/opinion/do-your-eyes-see-4k-394587
It shows things like the blind spot, etc.
I think the experiments may have missed a few important first principles, in that it would appear the vision tests were all conducted using both eyes, i.e., a binocular image gatherer versus a monocular one.
One test that might illustrate this is a simple depth of field/bokeh test, since the closest a single-lens camera can get to projecting an image our brains interpret as 3D is through the use of bokeh.
You can replicate this to some extent: hold your thumb up about 30 cm (1 ft) in front of your face and focus on it, and the background goes slightly out of focus. Interestingly, with both eyes open, the image our brain creates renders the background even more out of focus.
With my vision, if I close an eye, the background appears to come more into focus. It is as though each individual eye's focus is closer to infinity, and it is the doubling up of both image gatherers (eyes) that draws the brain into close-up focus.
So could it be that some of the results overstate the capabilities of a single eye?
Certainly I don't have anywhere near a 180° angle of view with one eye closed, so the choice of camera lens may have been incorrect to start with.
According to the internet, the human eye only contains around 6 million cone receptors and 120 million rod receptors. A rod receptor can be triggered by just one photon of light, hence the roughly ISO 800,000 night vision (a rod is essentially binary: no photon means off, or dark, and a photon means on, or light, which is why night vision lacks colour). Cone receptors may be more akin to camera sensors, as they require more light to decide which of the three primary colours to fire off as.
So it could be argued that the human eye is only 6 MP in colour vision, and that the brain, with its control of rapid eye movements, uses focus stacking/shifting and bracketing to create a more HDR-like image. Whether that brain-computed image equates to 576 MP would then depend on so many control factors, which might go some way to explaining why we can all see things a little differently.
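Following that logic with the commenter's own receptor counts, here is a quick sketch; note that the one-receptor-per-pixel mapping is a deliberate oversimplification, since saccades and the brain's "stacking" make the real answer far messier:

```python
# Receptor counts from the comment above, treated naively as photosites.
CONES = 6_000_000                    # colour receptors
RODS = 120_000_000                   # luminance receptors
UNIFORM_ACUITY_ESTIMATE = 576_000_000  # the 576 MP figure from the article

print(f"Naive colour resolution: {CONES / 1e6:.0f} MP")
print(f"Naive luminance resolution: {RODS / 1e6:.0f} MP")
print(f"576 MP estimate vs. total receptors: "
      f"{UNIFORM_ACUITY_ESTIMATE / (CONES + RODS):.1f}x")
# The uniform-acuity estimate is ~4.6x the raw receptor count.
```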
It is because your nose is in the way that the 180-degree angle only extends side to side, not up and down. The other shortcoming of the camera, whether film or digital, is that it does not know what it is looking at.