No, Larger Sensors Do Not Produce Shallower Depth of Field

Most photographers believe that larger sensor sizes produce images with shallower depth of field, but that's not exactly true. 

Before we can fully explain depth of field, let's talk about how a lens works. Light rays reflect off of an object, and a lens focuses those rays onto a digital sensor. Focusing a lens allows a single point source of light at one precise distance to be rendered as a single point on the sensor. Everything in your scene that is closer or farther than your focus distance will create blur circles on the sensor rather than sharp points, because those light rays converge before or after the camera sensor rather than directly on it. Each of these blur circles is called a circle of confusion, and the circle of confusion "limit" is the largest a circle can be while still being perceived as a single point by a human viewer. The farther from the sensor these rays converge, the larger the blurry areas of light, or "bokeh," will be. 

What Exactly Is Depth of Field?

Technically speaking, depth of field is determined by what a human viewer perceives as acceptably sharp, which means that things like resolution, image size, and viewing distance can change the depth of field.

To understand this, imagine that you have a 100-megapixel image file. If you had a 4 x 6-inch print of this image and you were viewing it at arm's length, you would have a hard time determining exactly what was in focus and what wasn't, because the human eye can only perceive about 2 MP worth of detail at this size from this distance. Now imagine you printed the same image the size of a movie screen and could get as close to it as you wanted. From this perspective, you could easily tell what was in focus and what wasn't, which would technically make the depth of field shallower. Camera manufacturers have settled on a standard that assumes you are going to print the image at 8 x 10 inches and view it at 25cm. With these parameters, the circle of confusion limit is 0.029mm on a 35mm sensor; anything larger than that will appear blurry. 

Remember that a lens is only able to focus at one precise distance at a time. Anything closer or farther than this exact point isn't technically in focus; it just may appear to be in focus based on how much detail a human viewer can perceive. If you had a photograph with unlimited resolution and clarity and you could infinitely zoom in without losing any detail, the depth of field would become shallower as you zoomed in, because you could easily see what was sharp and what wasn't. 

Smaller Sensors Usually Produce Shallower Depth of Field

Most photographers assume that smaller sensors will produce a deeper depth of field, but technically speaking, smaller-sensor cameras usually produce a shallower depth of field because they tend to have higher pixel density (smaller pixels) on the sensor. The circles of confusion projected by the same lens will be the same physical size on both a 35mm and a Micro Four Thirds sensor, but when you blow up both images for print, the smaller-sensor image must be enlarged more to produce the same size print because it came from a smaller source. When you enlarge it more than the 35mm image, you are also enlarging everything in it, including the circles of confusion, and a human viewer can now more easily see what is in and out of focus. 

Remember how a circle of confusion needed to be smaller than 0.029mm to appear "in focus" on a 35mm sensor? On a Micro Four Thirds sensor, a circle of confusion must be smaller than 0.015mm. 
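Both of those limits follow from the same rule of thumb: the circle of confusion limit is commonly approximated as the sensor diagonal divided by about 1500. Here's a quick sketch of that arithmetic (the divisor and the exact sensor dimensions are common conventions, not figures from this article):

```python
import math

def coc_limit(width_mm, height_mm, divisor=1500):
    """Approximate circle-of-confusion limit: sensor diagonal / 1500."""
    return math.hypot(width_mm, height_mm) / divisor

full_frame = coc_limit(36.0, 24.0)   # 35mm "full frame" sensor
micro_43 = coc_limit(17.3, 13.0)     # Micro Four Thirds sensor

print(f"Full frame CoC limit: {full_frame:.3f} mm")  # ~0.029 mm
print(f"Micro 4/3 CoC limit:  {micro_43:.3f} mm")    # ~0.014 mm
```

The divisor varies by convention (1442 and 1730 also appear in the literature), which is why published limits like 0.029mm and 0.015mm can differ slightly from the raw arithmetic.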

Imagine a full frame 35mm sensor that is 20 MP and a Micro Four Thirds sensor (roughly a quarter the area) that is also 20 MP. If you attached both cameras to the same 35mm lens, the full frame sensor would capture the entire scene projected by the lens, while the Micro Four Thirds sensor would capture only the center of it. Both images have the same resolution, but the image taken with the smaller sensor would be cropped in, giving the viewer an even closer look at all of the details. This would let the viewer judge precise focus more easily, meaning that the smaller sensor actually produced shallower depth of field. Check this out for yourself on any depth of field calculator.

In the video above I didn't get too deeply into this because it can get confusing and this phenomenon is very difficult to see unless you have cameras with wildly different sensor sizes and resolutions. The more important bit of information is what exactly is causing changes in depth of field.

The Only Three Things That Affect Depth of Field

1. Changing the focus distance

The only way to change your focus distance is to move your subject or move your camera. As you move the camera farther from your subject, your focusing distance increases and your depth of field increases. This happens because the light rays bouncing off your subject and entering your lens converge more gradually the farther away the camera is.

2. Changing your focal length

Your lens's focal length is the distance from the lens's optical center to the point where the light rays converge on your camera's sensor. As the lens moves farther from the sensor, the light rays converge more gradually onto the sensor, which means that rays from out-of-focus objects tend to focus farther in front of or behind the sensor. This creates larger circles of confusion (bokeh) and a shallower depth of field. 

3. Changing the lens' aperture

The final way that we can change our DOF is with the lens's aperture. By stopping down the aperture, you are physically blocking the light rays entering at the edges of the lens, which would produce the blurriest circles of light on the sensor. Closing down the aperture will create a darker overall image, but it will also increase the depth of field.
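All three factors can be checked numerically with the standard thin-lens depth of field approximation. This is only a sketch: the 0.029mm circle of confusion is the 35mm convention from earlier, and the focal lengths, f-stops, and distances are illustrative values, not taken from the article:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.029):
    """Total depth of field in mm via the standard hyperfocal approximation."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    if subject_mm >= hyperfocal:
        return float("inf")  # far limit reaches infinity
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

base = depth_of_field(50, 2.8, 2000)          # 50mm, f/2.8, subject at 2m
print(depth_of_field(50, 2.8, 5000) > base)   # 1. farther subject -> deeper DoF
print(depth_of_field(85, 2.8, 2000) < base)   # 2. longer lens -> shallower DoF
print(depth_of_field(50, 8.0, 2000) > base)   # 3. smaller aperture -> deeper DoF
```

Each comparison prints True, matching the three rules above: increasing focus distance or stopping down deepens depth of field, while increasing focal length shallows it.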

If you'd like an illustrated example of how each of these changes affects depth of field, this video does a great job of explaining it.

Conclusion

The sensor size itself does not produce shallower depth of field, but bigger sensors will force photographers to move closer to their subjects or use longer lenses to match the field of view of a smaller-sensor camera. Moving closer and increasing your focal length both decrease depth of field. 

If you enjoyed this, you may also enjoy my recent video/post debunking lens compression.


Lee Morris is a professional photographer based in Charleston SC, and is the co-owner of Fstoppers.com

78 Comments

You were right, it should have been 150mm, but that's still equivalent to a 100mm, not a 135mm.

Once again, it does. If you have a 5-seat PT Cruiser and a van, the van can transport more people; you just might have to buy extra seats. So they both have the same lower limit (1 person) but different upper limits (5 vs. 9 or more).

The same is true for sensors.
The upper limit is at ~f/0.7; the lower is somewhere at f/32/crop.
So you can always get shallower DoF with FF than with MFT. With MF you could, but you might have a hard time finding the lens for it.
This is because most MF users are more obsessed with sharpness than with DoF.
E.g., compare the Sigma 85mm f/1.4 and the Fuji 110mm f/2.

However, no one who shoots with a Hasselblad reads a blog post about photography basics.

But plenty of people deciding between MFT, APS-C, and FF do.

And a clickbait article like this does little to help the confusion about DoF and sensor size.

There are two current MF sensors, one with a 0.2 crop factor and the other with a 0.64 crop factor.

For both of those crop factors there is a better alternative on FF.

135mm f2 and a 105mm f1.4

Which camera has a 0.2 crop?
The Fuji GF50X, the X1D-50c, and the H6D-50c have ~0.76,
and the H6D-100c and Phase One XF are ~0.64 (645).
0.2 would be about 18x12cm.
However, if there is one, use this: (https://www.bhphotovideo.com/c/product/43912-USA/Rodenstock_160704_210mm...)
It's a 42mm f/1.1 equivalent ;)

Haha, no, I meant 0.8 lol, that's a weird typo, not sure how that happened.

“However, this is not the DoF perceived by the viewer. How come? Take a 100MP image and downscale it to 2MP; the perceived DoF does not change. Why? Because we perceive DoF at the image scale, not the pixel size. So you can just ignore different resolutions.”

Yes it does. If you lower the resolution enough, or move far enough away from the print, everything will appear in focus.

No. If you have an image of just a bokeh ball, of course you can downscale it until it's just one pixel, but that's not the point.
Here, look at this image of guitar strings: https://wallhere.com/en/wallpaper/661572
What you call DoF is the short length in the middle where the strings appear sharp, and yes, the length of that changes with resolution. But what you perceive as DoF is the angle of the wedge formed by the out-of-focus string. This angle changes with f-stop, distance, and focal length, or, if you are willing to parametrize it differently, it depends on angle of view, lens diameter, and sensor size. But of course you can always downscale to a single pixel ;)

You can simplify this even further: Depth-of-field is determined by magnification and the iris. Magnification is determined by distance to subject and angle of view. Angle of view is determined by both the focal length and the sensor size.

If you were to keep the angle of view, distance to subject, and f/stop constant, but vary only the sensor size, you'd see similar photos with different depths-of-field. If you were to keep the focal length, distance to subject, and f/stop constant, but vary only the sensor size, you'd see dissimilar photos with the same depths-of-field.
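The first comparison above can be sketched with the standard thin-lens depth of field approximation, scaling both the focal length and the circle of confusion limit by the crop factor. The specific numbers (2m distance, f/2.8, a 2x crop) are illustrative assumptions, not from the comment:

```python
def dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Thin-lens total depth of field in mm (subject closer than hyperfocal)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

# Same angle of view, same distance (2m), same f/2.8, different sensor sizes:
ff = dof_mm(50, 2.8, 2000, coc_mm=0.029)     # full frame with a 50mm lens
m43 = dof_mm(25, 2.8, 2000, coc_mm=0.0145)   # 2x crop with a 25mm lens, same framing

print(f"Full frame: {ff:.0f} mm of DoF")   # the shallower of the two
print(f"Micro 4/3:  {m43:.0f} mm of DoF")  # roughly twice as deep
```

With framing held constant, the smaller sensor's depth of field comes out roughly one crop factor deeper, which is the "similar photos with different depths-of-field" case.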

Yes, they are pretty much just choosing to "keep the focal length, distance to subject, and f/stop constant, but vary only the sensor size," seeing "dissimilar photos with the same depths-of-field," and stating that sensor size doesn't matter.

Tony's got it, and I'm afraid this article is wrong. You can only compare depth-of-field if you keep some of the variables the same. So typically you keep the display image size the same (e.g. 8"x10" print), viewing distance the same, visual acuity the same. And you assume the display (print or monitor) is fine enough that the pixel size or print dot size is not interfering with one's ability to spot sharpness. Then when you change the other variables, you find that indeed sensor size matters because smaller sensors need greater magnification to produce the same size print.

Sensor resolution is completely irrelevant unless the sensor is so chunky that the pixel size becomes the limiting factor for being able to determine what is sharp. And that really hasn't been an issue for about a decade at any comfortable viewing distance.

However, if one's definition of depth-of-field is the degree of sharpness when one views an image at 100% resolution on a given monitor display, then different factors come into play. The magnification is now determined by the size of the sensor pixels rather than just the size of the sensor. And for a given number of megapixels, a crop sensor will have smaller sensor pixels than a full frame. But really, viewing images at 100% is for pixel-peepers, so this definition is not very useful.

What exactly is wrong? I agree with everything you've written but I don't know what in the article says any different.

Your parametrization is. ;)
Or rather, the idea that one would take a picture with the same focal length, f-stop, and distance on different sensor sizes.

Great companion piece to your lens compression video, Lee. Really like the direction you're going with these breakdown videos. Hope to see more like this.

"but bigger sensors will force photographers to move closer to their subjects or to use longer lenses to produce similar fields of view of a smaller-sensor camera."

That in itself contradicts your title. They are not "forcing" you to do anything; you're composing your image with the format you desired (e.g. 35mm, 645, 6x7, 8x10). You literally just explained how and why format size does matter and how it directly affects DoF.

An image shot on 8x10 will have a much shallower DoF than one shot on 35mm. The size of the film/sensor directly affects the properties of the lenses, which directly affects how you compose your image (or, as you state it, "will force photographers to move closer to their subjects or to use longer lenses to produce similar fields of view of a smaller-sensor camera"). That's why people say a bigger sensor gives you shallower DoF.

You're basically just choosing to ignore how the size of the film/sensor directly affects lens choice/properties, and stating that everything else affects DoF. In reality, it's the other way around: the size of your film/sensor directly affects your lens choices, which affects how you compose, which directly affects your DoF. The image maker is the one making that creative choice.

Technically speaking, smaller sensors produce shallower depth of field but in practice, larger sensors do. This video/post explains that.

Lee Morris , thanks for the reply. Just sent you a DM w/ some thoughts.

Oh yeah, thanks to this article I just sold my DSLR and traded it for a smartphone. A tiny sensor with 16MP will give me a ton of smooth bokeh.

Yes, a 50mm f/1.4 lens on APS-C gives a similar result to an 85mm f/2.0 lens on an FF sensor. No doubt, the fifty will be smaller, cheaper, and lighter, and possibly sharper. But what lens do I need to buy to get the same result on APS-C as an 85mm f/1.2 gives on FF? The Leica NOCTILUX-M 50mm F0.95 is the answer! For the price of that lens I'd rather buy a whole new FF system. An added benefit of FF is also a WIDER field of view, which means I can get closer to a couple, so I do not need to yell at them while posing. Indoor real estate is also MUCH easier and cheaper to do... actually, MF would be the answer if price were not an issue. For birding, M4/3 is better, for sure.

Yes, mathematically it works out that the depth of field between both formats is the same. However, that's not quite the way it works out artistically when you see the final result. So if you want an apparently more shallow depth of field, then choose a larger format. The converse could also be said.

This article needs to be redone; there is too much wrong information due to improperly converting MFT lens stats to full frame stats. I.e., an MFT 50mm lens with an aperture of f/2.8 is the equivalent of a full frame 100mm lens at f/5.6. Most marketing materials from lens and even camera manufacturers like to advertise, wrongly I might add, that a 50mm f/2.8 lens on an MFT camera is equivalent to a full frame 100mm at f/2.8. In fact, if you multiply the focal length of an MFT lens to compare to a full frame lens, you must also multiply the aperture by the same factor to get equivalent results. If you redo the test with the D850 set to f/5.6, the GH5 and D850 will have the same field of view and depth of field, within reason; nothing will be perfect.

The reason f stops don’t correlate to dof is because they correlate directly with exposure. It’s silly to say that lens manufacturers are falsely advertising when they are advertising exactly what has been the standard from the beginning.

Nothing about this article is wrong, at least nothing has been pointed out as wrong to me yet.

"The reason f stops don’t correlate to dof is because they correlate directly with exposure." Lets start over, I might have come off a little hostile and apologize. F stops totally correlate with DOF, f stop of 2.8 is going to give you much shallower dof then one of F8. Now it would not correlate with field of view, which is what I assume you meant. The article isn't wrong per-say. However you are comparing apples to oranges when you only convert the focal length unless you are just showing FoV. A MFT camera has a crop of 2.0 which make the math easy when you are looking for equivalency to full frame performance which is what most photographers are familiar with. A MFT camera with a 50mm F2.8 lense with an ISO set to 200 will produce the same results as a full frame camera with a 100mm F5.6 lense with an ISO set to 800.

“An MFT camera with a 50mm F2.8 lens with an ISO set to 200 will produce the same results as a full frame camera with a 100mm F5.6 lens with an ISO set to 800.”

Just because the diagonal of a sensor is half the size of a full frame sensor doesn’t mean the aspect ratio is automatically the same too.
Micro Four Thirds has an aspect ratio of 4:3 while ff 35mm has an aspect ratio of 3:2 (apples to oranges…)

The problem with equivalence is everything else needs to be the same to get the same results and that’s usually not the case.
An F-stop (theoretical value based on focal length) is not the same as a T-stop (measured light transmission). You need the same light transmission to get the same exposure result, so you need to compare T-stops not F-stops for that.
ISO is another problem. The ISO-value you see on the camera is usually not the real ISO value.
Let’s take the Olympus OMD E-M1 Mark II. If you measure the ISO 200 setting you’ll find that it’s only ISO 85
Take the Nikon D850 to compare. ISO 800 on that camera is measured ISO 559
The actual difference in this case between MFT ISO 200 and FF ISO 800 is 2.6 stops instead of the 2 stops you expect based on the camera ISO-settings.

However, the article is not primarily about equivalence, I think; it's about the things that affect DOF, and those are correct.

You are right, the aspect ratio will be different, but you can also set the MFT to 3:2, though that would be gimping the camera. Also, T-stop is a better measurement than the standard f-stop. You're also correct about ISO not really being the same from camera to camera; we will never have a 1:1, even when comparing full frame to full frame. We are only human.

Also, yes, most of the DoF information is correct except in the YouTube video at just past 7 minutes. The problem there with the DoF is that he used the full frame at f/2.8 vs. the MFT at f/2.8. If the D850 had been set to f/5.6, then the FoV and DoF would have been about dead on, as much as they could be. Instead he blamed the longer lens, which is theoretically the same length; 50mm on MFT is equivalent to 100mm on a full frame. No one seems to disagree with that, but most people somehow don't think you also need to apply the crop factor to the aperture, when you do. Also, to get equal(ish) noise levels you have to square the crop and then multiply by the ISO of the MFT.

“The problem there with the DoF is that he used the full frame at f/2.8 vs. the MFT at f/2.8. If the D850 had been set to f/5.6, then the FoV and DoF would have been about dead on, as much as they could be.”

That might depend on the definition of FoV.
Let’s take the one given by Wikipedia:
===In photography, angle of view (AOV) describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view.===
https://en.wikipedia.org/wiki/Angle_of_view

F-stop is irrelevant in this case. F/2.8 and f/16 on the same camera with the same lens will generate the same FoV. DoF will be different of course, but DoF is not a variable in the given definition.

“…Instead he blamed the longer lens, which is theoretically the same length; 50mm on MFT is equivalent to 100mm on a full frame.”

There is a difference between “equivalence” and “equality”.
He is not wrong; the same FoV and the same settings (f-stop, shutter speed, ISO) will result in a different DoF because of the longer focal length.

“…you also need to apply the crop factor to the aperture, when you do. Also, to get equal(ish) noise levels you have to square the crop and then multiply by the ISO of the MFT”

All true if the subject is (full) equivalence, but I don’t think it was.
You can also argue that using an 85mm f/1.2 on FF at f/1.2 and ISO 100 is impossible to replicate on MFT, because you would need a 42.5mm f/0.6 lens at ISO 25 to get the same shot, so sensor size does matter in some cases.
The best you can hope for is that these kind of articles and the discussions in the comments will give readers a better understanding of the differences between formats.

"smaller sensor cameras usually produce a shallower depth of field because they tend to have higher pixel density/smaller pixels on the sensor"

Good article. However the quote above is misleading: https://www.dpreview.com/forums/post/61221058

Cheers,
Jack

Agreed. I'm kind of jumping between the "standard" definition of dof and the true definition. I'm going to do another video/post soon and clarify this further.

Great post! I've tried explaining DoF between full frame and crop before and just confused the matter. I'll send people to this article next time! :)

I know I am a bit late to the party on comments, but when I came across this, I have to say I found the title rather misleading.

When doing a photographic test (or just about any scientific test), if our test variable is the sensor size, then to test it methodically, we need to keep other things constant, such as the same subject distance, same exposure, same aperture f-stop value, same sensor total MP, and same image composition, and just change sensor size. Given these test conditions, then the image on the larger sensor WILL have a shallower depth of field.

The explanation is simple. In order to keep the same composition, the focal length of the cropped-sensor lens will be less than the FF-sensor lens focal length. As the physical size of the aperture opening is given by the focal length divided by the f-stop value (e.g. f/2.8), the longer focal length lens used on the FF-sensor camera will have a physically larger opening, leading to larger circles of confusion at the sensor, and therefore reduced depth of field.
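That physical-opening argument is easy to check numerically. A quick sketch, assuming a 2x crop factor so that 50mm on full frame and 25mm on the crop body give the same composition:

```python
def pupil_diameter_mm(focal_mm, f_number):
    """Physical aperture (entrance pupil) diameter: focal length / f-number."""
    return focal_mm / f_number

# Same composition at f/2.8: 50mm on full frame vs. 25mm on a 2x crop body.
print(round(pupil_diameter_mm(50, 2.8), 1))  # 17.9 mm opening on full frame
print(round(pupil_diameter_mm(25, 2.8), 1))  # 8.9 mm opening on the crop body
```

The full frame lens passes light through an opening twice as wide at the same f-stop, which is exactly why its circles of confusion, and hence its blur, are larger.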

This is covered by your video, but is potentially made confusing by the title. Perhaps the title would be more educational if it read "How sensor size affects depth of field".

Please do not misunderstand, as the article and video provide useful information, and your efforts in making the video are appreciated.

It doesn't matter how scientific or technical you get. To me, my full frame cameras (D750,D700) produce better images than my non FF cameras. The focused and out of focus areas look more natural, pleasing and similar to how the human eye sees things.

When the human eye focuses on a person or a car, the entire car or person is not sharp; maybe only part of their face or their eyes is. With iPhones, APS-C cameras, etc., the DoF looks fake because the whole person or object is in focus. It ends up looking like the first generation of "Portrait mode" on smartphones. This is the major reason why I haven't jumped on the whole mirrorless bandwagon yet. Some of the photos from mirrorless APS-C cameras just look like iPhone photos to me. Yes, I know they have FF mirrorless now; I'm still keeping an eye on battery life, price, etc.

It sounds like you are just trying to justify your APS-C purchases which is cool. To each their own.

My experience: Shooting for 20 years. 16 years professionally.