For centuries, scientists argued vehemently about the nature of light. Two sides debated a question pivotal to the development of physics: is light a particle or a wave? It wasn't until the 20th century that one of the most startling revelations about our universe came to prominence: light is both.
We see the wave-particle duality in photography every day. Exposure is governed by the particle nature of light, namely how your shutter speed and aperture dictate how many photons reach your sensor and how your ISO dictates how sensitive the sensor is to those photons. On the other hand, the wave nature of light determines such characteristics as color (my girlfriend is particularly fond of a certain wavelength around 490 nm).
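To put rough numbers on the particle side, here's a minimal Python sketch, assuming the usual model in which the light gathered is proportional to exposure time and to aperture area (which scales as 1/N² for f-number N):

```python
import math

def relative_exposure(shutter_s, f_number):
    """Relative photon count reaching the sensor: proportional to
    exposure time and to aperture area, which scales as 1/N^2."""
    return shutter_s / f_number ** 2

# One stop less light, two equivalent ways: halve the shutter time,
# or multiply the f-number by sqrt(2) (e.g. f/8 -> f/11).
base = relative_exposure(1 / 125, 8)
print(relative_exposure(1 / 250, 8) / base)                 # 0.5
print(relative_exposure(1 / 125, 8 * math.sqrt(2)) / base)  # ~0.5
```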
One particularly fascinating property of the wave-like nature of light is diffraction. When light passes through an opening, it bends near the edge, and the smaller the opening, the more pronounced the bending. This is an issue for photographers because light rays that are bent and separated are no longer focused when they reach the sensor. A point of light passing through a lens should be focused to a point on the other side; when it isn't, the rays spread over the aptly named circle of confusion, whose maximum allowable size is a measure of acceptable error based on our eyes' ability to resolve detail. This isn't all, though. Light that is bent can also be out of phase and interfere with itself, creating a regular pattern of amplified and cancelled luminosity whose bright central spot is called an Airy Disk, after its discoverer, the astronomer George Biddell Airy.
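The size of that pattern is easy to estimate. The Airy Disk's diameter out to its first dark ring follows the standard approximation d ≈ 2.44·λ·N for wavelength λ and f-number N; a quick sketch, assuming 550 nm green light (a common reference choice):

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550):
    """Airy Disk diameter out to the first dark ring, in microns:
    d ~ 2.44 * wavelength * N (standard far-field approximation)."""
    return 2.44 * (wavelength_nm / 1000) * f_number

for n in (4, 8, 16):
    print(f"f/{n}: {airy_disk_diameter_um(n):.1f} µm")
# f/4: 5.4 µm, f/8: 10.7 µm, f/16: 21.5 µm -- the smaller the
# opening, the larger the diffraction pattern on the sensor.
```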
However, diffraction is not always a problem. If the diameter of the Airy Disk is small compared to the circle of confusion and the camera's pixel size, diffraction isn't noticeable: the pixels are simply too big to catch the relatively small error. On the other hand, when the Airy Disk grows to the order of the circle of confusion or the pixel size, diffraction becomes visible.
There are two ways to make diffraction more apparent: reduce the size of the opening the light must pass through (i.e. close down the aperture) or increase the sensor's ability to resolve fine detail (i.e. reduce the size of individual pixels). This results in a tradeoff: higher-resolution cameras with smaller, more tightly packed pixels have less leeway in their ability to tolerate small apertures. Thanks to anti-aliasing filters and the Rayleigh Criterion (by which two Airy Disks remain resolvable until their centers are closer than half a disk's diameter), Airy Disk diameters can typically reach 2 to 3 times the diameter of an individual pixel before issues arise.
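Here's a rough sketch of that tradeoff, solving 2.44·λ·N = k·(pixel pitch) for the limiting f-number, where k = 2.5 is an illustrative middle of the 2-to-3 range above rather than a hard constant:

```python
def diffraction_limited_aperture(pixel_pitch_um, k=2.5, wavelength_nm=550):
    """Largest f-number before the Airy Disk (diameter ~ 2.44 * wavelength * N)
    spans more than k pixels. k is taken from the 2-3x range above;
    2.5 is an illustrative middle value, not a hard constant."""
    return k * pixel_pitch_um / (2.44 * wavelength_nm / 1000)

for pitch in (8.0, 6.0, 4.0):
    limit = diffraction_limited_aperture(pitch)
    print(f"{pitch} µm pixels: diffraction limit ≈ f/{limit:.1f}")
# 8 µm -> f/14.9, 6 µm -> f/11.2, 4 µm -> f/7.5: smaller pixels
# resolve more detail but hit their limit at a wider aperture.
```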
The Canon 5DS
When Canon first announced the 5DS and 5DS R, with more than double the resolution of the 5D Mark III, I had many thoughts ranging from “I’m going to need a lot more RAM” to “those are some small pixels.” In fact, the 5DS has a pixel pitch of 4.14 microns, versus the 5D Mark III, which has a pixel pitch of 6.25 microns. From a purely physics standpoint (advancements in camera technology aside), this means the 5DS will not perform as well in low light (thus, its maximum ISO of 6,400, compared to 25,600 on the 5D Mark III), but of course, the 5DS is truly built for studio and landscape work.
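Those pitch figures fall straight out of the sensor geometry, assuming the nominal 36 mm full-frame width and each camera's horizontal pixel count (8688 on the 5DS, 5760 on the 5D Mark III):

```python
# Pixel pitch = sensor width / horizontal pixel count.
SENSOR_WIDTH_UM = 36_000  # nominal full-frame width, 36 mm

for name, h_pixels in [("5DS", 8688), ("5D Mark III", 5760)]:
    print(f"{name}: {SENSOR_WIDTH_UM / h_pixels:.2f} µm pixel pitch")
# 5DS: 4.14 µm, 5D Mark III: 6.25 µm -- matching the figures above.
```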
Both studio and landscape photographers demand the utmost level of detail in their work. Often, that means using small apertures to ensure that the entirety of the image is sharp and no detail is lost. With a more than twofold jump in pixel count on the 5DS, this is a potential recipe for diffraction issues. On a 5D Mark III, the size of the Airy Disk begins to exceed the size of the circle of confusion just after f/11; that is its diffraction limit, the point at which diffraction begins to become visible when viewing an image at 100% at a typical viewing distance. This is different from the diffraction cutoff frequency, the point at which Airy Disks completely merge and no amount of stopping down will improve resolution. Think of the space between the diffraction limit and the cutoff frequency as the zone of diminishing returns. The 5DS, on the other hand, reaches its diffraction limit just before f/8, slightly over a full stop sooner than the 5D Mark III. That might have landscape photographers and anyone who relies on a large depth of field worried.
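Plugging the two pixel pitches into the same limiting-aperture estimate sketched earlier (again assuming k = 2.5 and 550 nm light) reproduces both figures:

```python
# Limiting f-number where the Airy Disk spans k pixels:
# 2.44 * wavelength * N = k * pitch, with wavelength = 0.55 µm, k = 2.5.
for name, pitch_um in [("5D Mark III", 6.25), ("5DS", 4.14)]:
    n_limit = 2.5 * pitch_um / (2.44 * 0.55)
    print(f"{name}: diffraction limit ≈ f/{n_limit:.1f}")
# 5D Mark III: ≈ f/11.6 ("just after f/11")
# 5DS:         ≈ f/7.7  ("just before f/8")
```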
Should you be worried? Absolutely not. A key assumption went into these calculations: a perfect lens, in which diffraction is at its minimum wide open. To speak only about the effect the camera sensor has on diffraction, we had to remove the other variable: the lens. Of course, no lens is perfect. Even our most spectacular modern lenses are not perfect optical instruments, and in practice lens aberrations overwhelm the effects of diffraction for the first few f-stops; otherwise, we would never stop down a lens to increase sharpness. All other variables being equal, a 50.6-megapixel sensor will always show more detail than a 22.3-megapixel sensor.
What this does mean is that if you're used to balancing depth of field against the sharpness lost to diffraction, you should take a bit of time to recalibrate where you make that tradeoff if you've ordered a high-megapixel camera. You might find that, because the diminishing returns begin at a wider aperture, you'd prefer to open your lens an extra stop and either increase your subject distance to compensate for the shallower depth of field or focus a bit farther out to maintain your hyperfocal distance.
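For a feel of how far that refocus moves, here's the standard hyperfocal formula H = f²/(N·c) + f in a short sketch; the 24 mm focal length and 0.03 mm circle of confusion are illustrative assumptions, not anything specific to these cameras:

```python
def hyperfocal_m(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f, in meters.
    c = 0.03 mm is a common full-frame circle of confusion
    (an assumption, as is the 24 mm lens below)."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000

for n in (11, 8):
    print(f"24 mm at f/{n}: hyperfocal ≈ {hyperfocal_m(24, n):.1f} m")
# f/11: ~1.8 m, f/8: ~2.4 m -- one stop wider means focusing
# roughly two thirds of a meter farther out to keep infinity sharp.
```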