The Smaller the Sensor Size, the Shallower Your Depth of Field

When talking about the differences between full-frame cameras and crop sensors, one of the biggest arguments in favor of full-frame sensors is the ability to produce images with a shallower depth of field. This was always my understanding of the subject as well. But after watching this video, I have seen the error of my ways. As it turns out, if all the variables are the same and the only thing changing is sensor size, the smaller the sensor, the shallower your depth of field.

I'm not going to try to explain all the science and math from the video, because the video does a much better job than I could even attempt. But my biggest takeaway was the point about a sensor's crop factor and how it's used to calculate a lens' equivalent focal length. Most people multiply the focal length of a lens by the sensor's crop factor in order to get the full-frame equivalent. The trick, though, is that you need to multiply the crop factor by the aperture as well as the focal length.
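As a quick sketch of that takeaway (the function name and example numbers here are my own, hypothetical ones, not from the article):

```python
def full_frame_equivalent(focal_length_mm, f_number, crop_factor):
    """Scale BOTH focal length and f-number by the crop factor to find
    the full-frame lens that matches field of view AND depth of field."""
    return focal_length_mm * crop_factor, round(f_number * crop_factor, 2)

# e.g. a 50mm f/1.8 on a 1.5x APS-C body frames and blurs like a
# 75mm f/2.7 would on full frame.
print(full_frame_equivalent(50, 1.8, 1.5))  # (75.0, 2.7)
```

One caveat worth keeping in mind: the scaled f-number describes framing and depth of field, not exposure; f/1.8 still gathers f/1.8's light per unit area.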

The reason why it seems that full-frame cameras have a shallower depth of field has a lot to do with the focus distance needed in comparison to a crop sensor. The example below shows that in order to get the same field of view on a crop sensor, you need to increase the distance to the subject. This added distance is what increases the depth of field on the crop sensor.

Who here just had their mind blown?


94 Comments


"With a smaller sensor you will have to magnify the image even more to get the same print size as the full frame."

Or not. You're confusing negatives and sensors.

Craig Marshall's picture

No, I'm not confusing negatives and sensors; I am talking mathematics. Crop factor is not as accurate as magnification. There is a magnification factor of about 1.5; it's that simple.

Actually, Jason is correct. Switching on "crop mode" on those cameras simply crops the photo *in camera*, similar to taking the photo into Photoshop and cropping the image so that the subject fills more of the frame, giving the illusion of a longer focal length.

...all at the expense of overall image resolution.

Felix, different megapixel counts on the same SIZE sensor have to do with pixel density.

i.e., more pixels per square inch. The sensor size itself stays the same regardless of megapixels, unless of course one is FF and one is APS-C. Or even medium format.

Simple explanation is this: Take the FF sensor and decrease the pixel size. Then, according to the concept explained in the video, you have decreased DoF.
But you still have a FF sensor and the same lens. So what just happened?

Jason Vinson's picture

I agree. I didn't think about that when watching the video. I'll have to reach out to the creator and see if they have an explanation.

Only subject distance (focus distance) and aperture have an effect on perceived DoF. That's the look.
The rest is just theory on when to consider a circle small enough to be called sharp. And that again depends on viewing size and distance (and your eyesight, of course).

Addition: lenses and sensors just crop in or out and thereby simply scale the image contents, but they don't change foreground-to-background relationships (which again are relevant for perceived DoF).

michael andrew's picture

To make your point a reality, "perceived" must end up on a format or medium to be judged, correct? Like a print, perhaps. When you include this in the real world, then resolution and pixel size do in fact play a part. Do not try to dismiss it; people used to print before computers.

michael andrew's picture

If you consider print viewing distance, size, resolution, and circle of confusion, then mathematically that is exactly what you have done. Then again, what is mathematical is not necessarily completely visible, and acceptable focus is somewhat subjective.

John Hess's picture

Not changing the resolution per se, changing the enlargement ;)

Daniel Karr's picture

Jason, you're right about changing resolution playing into the equation, and I guess that's kinda my point. It's not so much smaller sensor size that changes apparent depth of field, but smaller pixel size. So a smaller sensor of the same resolution would have a tighter tolerance for depth of field. If you compared a 42MP full-frame camera with a 16MP APS-C camera, the smaller sensor would actually have larger pixels, and that would mean a deeper apparent depth of field (or a larger circle of confusion).

John Hess's picture

It's not the pixel size either - it's the smaller circle of confusion.

A 50MP FF sensor given the same lens will shoot a deeper DoF than an 18MP APS-C sensor, given the exact same print.

Arturo Mieussens's picture

Yes, but then you will need to enlarge the cropped image more than the other to produce the same final result, which will also reduce the sharpness and so decrease the depth of field, again in the final image. Nobody looks at the image on the sensor; it's always enlarged somehow, and the smaller the sensor, the more you have to enlarge.

Daniel, while your analogy is accurate, the thing about turning on "crop mode" on an A7 is that it is similar to a *digital* crop -- in that the amount of the sensor used in the resultant photo is decreased in order to add more "reach" to your lens -- at the expense of resolution.

It's NOT the same as how focal length of the same lens covers the ENTIRE sensor on two different cameras, one full frame and one APS-C.

The distance between a lens element and the sensor is PHYSICALLY DIFFERENT on cameras of varying frame sizes; this is not true of the Sony A7 when you switch "crop mode" on or off.

And so, the article title is actually correct.

Think about it: All else being the same, a 100mm on a full frame becomes a 160mm on a 1.6 crop APS-C. When you *increase* focal length, DOF *decreases* at the same f-stop.

The reason it *appears* shallower on a full frame -- like in the above example with the plush monkey -- is that you actually need to be PHYSICALLY CLOSER to the subject in order to have it fill the same amount of frame space, thereby decreasing focus distance, which blurs the background more.

Markus Hofstätter's picture

Totally agree; I read that too late. It would have been better to post my answer here than at the end of the thread.

Thank you, Daniel. I just posted (admittedly in a state of slight frustration). Had I read your post first, I would not have needed to. You are correct.

Your statement disagrees with geometrical optics if and only if the CoC has to be taken as different (i.e. when mounting the lens on an APS-C camera with the same pixel pitch). Look at the DoF equations of geometrical optics: http://toothwalker.org/optics/dofderivation.html , eq. 13. Your magnification m = f/(v - f), with f the focal length and v the object distance, doesn't change. Neither does the aperture N. But if you use a different CoC c, the numerator changes, and therefore the DoF, as is reflected in the title. Can you explain why the DoF equation should be wrong in that case? Do you think you need not take a different CoC when cropping or mounting on a crop sensor? Or do you compare the final images using different output sizes?
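For readers without the link handy, the close-focus approximation this comment invokes can be written as follows (a standard geometrical-optics sketch, restated with the comment's symbols: f focal length, v object distance, N f-number, c circle of confusion, m magnification):

```latex
m = \frac{f}{v - f}, \qquad
\mathrm{DoF} \approx \frac{2\,N\,c\,(m + 1)}{m^{2}}
```

With m and N unchanged, a smaller permissible c (as required when a crop-sensor image is enlarged more) shrinks the numerator and therefore the DoF, which is the commenter's point.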

Tilt Shift's picture

Doesn't a smaller field of view necessitate a shallower depth of field in your example?

Bob Best's picture

IMHO this is the wrong way of thinking. While this is scientifically true, it is very misleading to the junior photographer.

Instead of holding the focal length constant, we should be holding the field of view constant, which is what you need to do if you want to create two images that have the same perspective and cropping. In this case, if you hold aperture constant, the larger sensor will give the shallower depth of field.

Why? Because in order to create the same cropping AND field of view/perspective, you have to place each camera the same distance from the subject and give the larger sensor a longer focal length lens, which decreases the depth of field when compared to a smaller sensor.
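A quick numeric sketch of that comparison, using the standard thin-lens hyperfocal-distance approximation (the focal lengths, distance, f-number, and CoC values below are assumed for illustration, not taken from the article):

```python
def depth_of_field(f_mm, n, coc_mm, dist_mm):
    """Approximate total DoF (mm) from the thin-lens hyperfocal formulas."""
    h = f_mm**2 / (n * coc_mm) + f_mm                       # hyperfocal distance
    near = dist_mm * (h - f_mm) / (h + dist_mm - 2 * f_mm)  # near limit of sharpness
    far = dist_mm * (h - f_mm) / (h - dist_mm)              # far limit of sharpness
    return far - near

# Same subject distance (3 m), same f/2.8, matched field of view:
# full frame uses a 75mm lens where a 1.5x crop uses a 50mm, and the
# crop's CoC is scaled down because its image is enlarged more.
dof_ff = depth_of_field(75, 2.8, 0.029, 3000)
dof_crop = depth_of_field(50, 2.8, 0.029 / 1.5, 3000)
print(dof_ff < dof_crop)  # True: the larger sensor gives the shallower DoF
```

Under these assumptions the full frame ends up with roughly two thirds of the crop camera's total depth of field, which is the usual practical argument for larger sensors.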

Ed Alexander's picture

I thought it was quite funny that he was out of focus for the entire video. Or do I need new glasses?!

Daniel Karr's picture

Yeah, I found that kinda annoying, especially considering the whole video was about focus. Other than that, it was a very informative video, if a bit misleading.

Smaller pixels have a shallower DOF. Saying that smaller sensors do is a bit misleading as they would have to have the same pixel count as their FF counterpart.

John Hess's picture

Hi everyone - I'm the guy that wrote and appears in the video. This one was in research for a long time, and I knew it would be a controversial topic (I remember Tony Northrup's video), so I purposely tried to create physical experiments that would demonstrate the phenomenon. Before I wrote it, I was one of those guys who says "of course sensor size doesn't matter," when yes, it actually does - in the opposite way we tend to say.

So to address a few questions that have popped up.

First, you have to forgive me for tying pixel size to circle of confusion - I didn't say it was exactly the pixel size, but that it is LIMITED by the pixel size. You can't have a CoC that is smaller than the pixel size, but the pixel size can be a lot smaller than the CoC. Where this analogy comes from: when I first started out shooting standard-definition DV video, I never had problems with focusing; then when I switched to HD, I noticed much more easily when the focus was off. If we were to pin the CoC to pixel size, then you can easily visualize how the smaller sensor has a shallower depth of field, purely as a thought experiment (and this analogy works really well for video, since we have standardized resolutions across sensor sizes).

But in reality - the Circle of Confusion is NOT tied to the pixel size.

The formula: CoC (mm) = (viewing distance (cm) / desired final-image resolution (lp/mm) for a 25 cm viewing distance) / enlargement / 25

Now, generally we don't know what our final image size (enlargement) will be, so a lot of people use something close to the Zeiss formula, which is d/1730 (sometimes d/1500), where d is the diagonal measure of the original image - in other words, the sensor.

So a full-frame camera would have a CoC of 0.029mm. If we do unscientific math, we would find a 12MP FF camera to have a pixel width of 0.008mm and a 50MP FF camera to have a 0.004mm pixel width. Both pixel widths are SMALLER than the CoC, so they would have identical DoF.
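That back-of-the-envelope arithmetic can be reproduced like so (assuming a 36x24mm sensor, a 3:2 aspect ratio, square pixels, and the d/1500 variant of the Zeiss rule):

```python
# Zeiss-style CoC for a full-frame sensor: diagonal / 1500.
diagonal = (36**2 + 24**2) ** 0.5   # full-frame diagonal, ~43.27 mm
coc = diagonal / 1500               # ~0.029 mm

def pixel_width_mm(megapixels, sensor_width_mm=36, aspect=3 / 2):
    """Approximate pixel pitch, assuming square pixels on a 3:2 sensor."""
    horizontal_pixels = (megapixels * 1e6 * aspect) ** 0.5
    return sensor_width_mm / horizontal_pixels

print(round(coc, 3))                 # 0.029
print(round(pixel_width_mm(12), 3))  # 0.008 -- smaller than the CoC
print(round(pixel_width_mm(50), 3))  # 0.004 -- also smaller than the CoC
```

Both pitches sit below the 0.029mm CoC, which is why the two resolutions share the same nominal depth of field at normal viewing sizes.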

But what if we keep cropping in - enlarging the photo? As we make the enlargement bigger, our circle of confusion gets smaller (from that equation above). Make it big enough that the CoC is about 0.006mm, and the 50MP camera will show a blur in the details that might have looked tack sharp on a 12MP camera, because the 12MP camera can't resolve that fine.

Regarding the notion that a crop sensor is JUST a cropped version of a full frame and that DoF doesn't change just because you crop... you have to compare apples to apples. The crop image has to be ENLARGED to match the FF image. Going back to our CoC equation: increase the enlargement and the CoC gets smaller - shallower depth of field.

If you need proof, watch our video - there's evidence of it right there. Someone mentioned that my video was out of focus; I'm a bit soft when I'm front and center, blown up big... but compare that to the shot where I'm in the corner. When my image is small, it looks sharp. It's the EXACT same video - I didn't change the focus, but when it was small it was sharp, and when it was big it was soft.

So consider the wide shot (where I'm small) to be the full frame and the close-up to be the crop sensor. What looks sharp in the full frame looks soft in the crop - right there is why smaller sensors have a shallower depth of field. ;)

Regarding this being the wrong way of thinking and that we should compare fields of view... in our video we do - it's the elephant in the room. But the phenomenon occurs, and we have to try to understand it. We can't just brush it off as "well, scientifically it's right, but it's wrong"... because invariably some optics guy is going to walk into the forum and start pounding at the keyboard about how everyone else is wrong. The problem is he may not explain it well, and then we have even more confusion.

So our goal was to put together a piece by piece explanation from the ground up. To do that we have to keep variables constant.

And now if my appeal to logic wasn't enough... here's a link to Zeiss's white paper on depth of field which I consulted throughout the research:

Page 9, from the section "Smaller film format with the same lens": "Reducing the size of the film format therefore reduces the depth of field by the crop factor."

http://www.zeiss.com/content/dam/Photography/new/pdf/en/cln_archiv/cln35...

Hi John, thanks for joining the conversation!

First of all, thanks for the many great videos on sensors and lenses so far. I really enjoyed watching those in the past.

In this one there's nothing wrong either.
I think what's a bit confusing to many is WHY the smaller sensor has shallower DoF. It is because a smaller sensor with the SAME total pixel count has to have SMALLER pixels. Therefore the same CoC would cover more pixels on a smaller sensor which results in blur.
If you had a smaller sensor with the same pixel pitch of the larger one, it really would just be a crop with exactly the same DoF. You'll just miss a lot of information outside the frame.

In order to get similar framing, however, you will have to back away with the camera. And this increased subject distance more than compensates for the effect of the smaller pixels. And that's why, in general, larger sensors are considered to have shallower DoF - simply because you can stand closer to the subject.
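A rough sketch of this compensation with assumed numbers (same thin-lens hyperfocal approximation as above): the same 50mm f/2.8 lens on both bodies, with the crop shooter stepping back 1.5x farther to match the framing.

```python
def depth_of_field(f_mm, n, coc_mm, dist_mm):
    """Approximate total DoF (mm) from the thin-lens hyperfocal formulas."""
    h = f_mm**2 / (n * coc_mm) + f_mm                       # hyperfocal distance
    near = dist_mm * (h - f_mm) / (h + dist_mm - 2 * f_mm)  # near limit
    far = dist_mm * (h - f_mm) / (h - dist_mm)              # far limit
    return far - near

# Full frame shoots from 2 m; the 1.5x crop backs up to 3 m for the same
# framing, and its CoC is 1.5x tighter because of the extra enlargement.
dof_ff = depth_of_field(50, 2.8, 0.029, 2000)
dof_crop = depth_of_field(50, 2.8, 0.029 / 1.5, 3000)
print(dof_crop > dof_ff)  # True: the added distance more than compensates
```

So even though the smaller pixels (tighter CoC) pull in the other direction, the extra subject distance dominates, which matches the comment's conclusion.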

But overall I think the rather confusing subject of equivalency is well explained.

John Hess's picture

The smaller pixels are one way to think about it, but it's important to remember that CoC is not tied to pixel size on the sensor. These CoC ideas were around in the film days, and film is agnostic to pixel size. The pixel thing was something I didn't anticipate in the discussions.

I came up with an analogy on another comment board. Look at the period at the end of this sentence. It looks like a single dot... That's a circle of confusion; whether it's made by a single pixel or a hundred smaller pixels doesn't matter, it's small enough to be considered sharp. If we zoom in, we are changing what we consider sharp; zoom in enough and we'll see it's not a dot but a bundle of pixels; zoom in further and we'll see it's a bunch of carbon atoms!

So basically it's not about how sharp it really is, it's about what we consider acceptably sharp and magnification plays a big role in that.

Agree 100% - as I mentioned in another comment before (quoting myself here, duh!):
"The rest is just theory on when to consider a circle small enough to be called sharp."
What counts is how we view the picture afterwards. (e.g. print size and viewing distance)

Hi John,
I've found your videos on lenses informative; I saw the latest one over on SLR Lounge. Even though I now own a DSLR, I'm still stuck in the film world, because that's what I'm most used to after 35 years. Different film formats have different focal lengths for what counts as wide angle, normal, and telephoto. I've been doing research on medium format photography, and a normal lens for 6x4.5 is 80mm, while for 6x7 it is 110mm.

Wouter Oud's picture

Read this section => https://en.wikipedia.org/wiki/Depth_of_field#Relationship_of_DOF_to_form... <= and see under what circumstances a smaller sensor has a shallower depth of field.
As far as I understand, it's only when the picture is taken from the same distance using the same f-number and the same focal length, and the final images are the same size. Which means it's not an equivalent comparison, since the FoV is not the same.
