No, Larger Sensors Do Not Produce Shallower Depth of Field

Most photographers believe that larger sensor sizes produce images with shallower depth of field, but that's not exactly true. 

Before we can fully explain depth of field, let's talk about how a lens works. Light rays reflect off of an object, and a lens can focus those rays onto a digital sensor. Focusing the lens allows a point source of light at one precise distance to be rendered as a single point on the sensor. Everything in your scene that is closer to or farther from that focus distance will create blur circles on the sensor rather than sharp points, because those light rays converge in front of or behind the camera sensor rather than directly on it. Each of these blurry circles is called a circle of confusion, and the circle of confusion "limit" is the largest the circle can be while still being perceived as a single point by a human viewer. The farther from the sensor those rays converge, the larger the blurry areas of light, or "bokeh," will be.
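To make that geometry concrete, here is a minimal sketch, assuming an ideal thin lens and using a hypothetical 50mm f/1.8 lens as the example, that computes the diameter of the blur circle a point source projects onto the sensor when it is not at the focused distance:

```python
# Minimal thin-lens sketch: diameter of the blur circle (circle of confusion)
# that a point source projects onto the sensor when it is not at the focused
# distance. Assumes an ideal thin lens; all distances are in millimeters.

def blur_circle_mm(focal_mm, f_number, focus_dist_mm, subject_dist_mm):
    aperture_mm = focal_mm / f_number  # physical aperture diameter
    # Standard geometric result for an ideal thin lens:
    # c = A * |S2 - S1| / S2 * f / (S1 - f)
    return (aperture_mm
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm
            * focal_mm / (focus_dist_mm - focal_mm))

# Hypothetical example: a 50mm f/1.8 lens focused at 2 m. A point 2.5 m away
# lands as a ~0.14 mm blur circle, far larger than the ~0.03 mm a viewer can
# resolve in a standard print, so it reads as bokeh rather than a sharp point.
print(round(blur_circle_mm(50, 1.8, 2000, 2500), 3))  # -> 0.142
```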

What Exactly Is Depth of Field?

Technically speaking, depth of field is determined by what appears acceptably sharp to a human viewer, which means that things like resolution, image size, and viewing distance can change the depth of field.

To understand this, imagine that you have a 100-megapixel image file. If you had a 4 x 6 print of this image and you were viewing it at arm's length, you would have a hard time determining exactly what was in focus and what wasn't, because the human eye can only perceive about 2 megapixels' worth of detail at this size from this distance. Now imagine you printed the same image the size of a movie screen and could get as close to it as you wanted. From this perspective, you could easily tell what was in focus and what wasn't, which would technically make the depth of field shallower. Camera manufacturers have settled on a standard that assumes you are going to print the image at 8 x 10 inches and view it from 25cm away. With these parameters, the circle of confusion limit is 0.029mm on a 35mm sensor. Anything larger than that will appear blurry.

Remember that a lens is only able to focus at one precise distance at a time. Anything closer or farther than this exact point isn't technically in focus; it just may appear to be in focus based on how much detail the viewer can perceive. If you had a photograph with unlimited resolution and clarity and you could infinitely zoom in without losing any detail, the depth of field would become shallower as you zoomed in, because you could easily see what was sharp and what wasn't.

Smaller Sensors Usually Produce Shallower Depth of Field

Most photographers assume that smaller sensors will produce a deeper depth of field, but technically speaking, smaller-sensor cameras usually produce a shallower depth of field, both because their images need more enlargement and because they tend to have higher pixel density and smaller pixels on the sensor. The circles of confusion projected by the same lens will be the same physical size on both a 35mm and a Micro Four Thirds sensor, but when you blow up both images for print, the smaller-sensor image has to be enlarged more to produce the same size print because it came from a smaller source. When you enlarge it more than the 35mm image, you are also enlarging everything in it, including the circles of confusion, and a human can now more easily see what is in and out of focus.

Remember how a circle of confusion needed to be smaller than 0.029mm to appear "in focus" on a 35mm sensor? On a Micro Four Thirds sensor, a circle of confusion must be smaller than roughly 0.015mm.
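As a quick sanity check, here is a tiny sketch assuming the common convention that the circle of confusion limit is roughly the sensor diagonal divided by 1500; it reproduces the 0.029mm full-frame figure and gives about 0.014-0.015mm for Micro Four Thirds, depending on rounding:

```python
# The CoC limit scales with sensor size. A common convention (assumed here)
# is sensor diagonal / 1500, which is roughly equivalent to dividing the
# full-frame limit by the crop factor.
import math

def coc_limit_mm(width_mm, height_mm, divisor=1500):
    return math.hypot(width_mm, height_mm) / divisor

print(round(coc_limit_mm(36.0, 24.0), 3))  # full frame        -> ~0.029 mm
print(round(coc_limit_mm(17.3, 13.0), 3))  # Micro Four Thirds -> ~0.014 mm
```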

Imagine you had a full frame 35mm sensor with 20 megapixels and a Micro Four Thirds sensor (roughly a quarter of the area) that also had 20 megapixels. If you attached both cameras to the same full-frame lens, the full frame sensor would capture the entire scene projected by the lens, while the Micro Four Thirds sensor would capture only the center of the scene. Both images have the same resolution, but the image taken with the smaller sensor would be cropped in, giving the viewer an even closer look at all of the details. This would let the viewer judge precise focus more easily, meaning that the smaller sensor actually produced a shallower depth of field. Check this out for yourself on any depth of field calculator.
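If you want to reproduce what a depth of field calculator does, here is a rough sketch using the standard thin-lens/hyperfocal approximation and a hypothetical 50mm f/2.8 lens focused at 3m on both formats; because the Micro Four Thirds image is judged against the stricter 0.015mm limit, its calculated depth of field comes out shallower:

```python
# Rough depth-of-field calculator (thin-lens / hyperfocal approximation).
# Same lens, same distance, same aperture; only the circle-of-confusion
# limit changes with the format. All distances are in millimeters.

def dof_mm(focal_mm, f_number, focus_dist_mm, coc_mm):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm)   # approximate hyperfocal distance
    near = hyperfocal * focus_dist_mm / (hyperfocal + (focus_dist_mm - focal_mm))
    far  = hyperfocal * focus_dist_mm / (hyperfocal - (focus_dist_mm - focal_mm))
    return far - near

# Hypothetical 50mm f/2.8 focused at 3 m:
print(round(dof_mm(50, 2.8, 3000, coc_mm=0.029)))  # full frame        -> ~580 mm of DOF
print(round(dof_mm(50, 2.8, 3000, coc_mm=0.015)))  # Micro Four Thirds -> ~300 mm of DOF
```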

In the video above I didn't go too deeply into this because it can get confusing, and this phenomenon is very difficult to see unless you have cameras with wildly different sensor sizes and resolutions. The more important bit of information is what exactly causes changes in depth of field.

The Only Three Things That Affect Depth of Field

1. Changing the focus distance

The only way to change your focus distance is to move your subject or move your camera. As you move your camera farther away from your subject, your focusing distance increases and your depth of field increases. This occurs because the light rays bouncing off your subject and entering your lens converge more gradually the farther away the camera is. (A numeric sketch after the third factor below illustrates all three effects.)

2. Changing your focal length

Your lens' focal length is, roughly, the distance from the lens's optical center to the sensor when the lens is focused at infinity. The longer the focal length, the farther the lens sits from the sensor and the more gradually the light rays converge onto it, which means rays from objects away from the focus distance come to a focus farther in front of or behind the sensor, creating larger circles of confusion (bokeh) and a shallower depth of field.

3. Changing the lens' aperture

The final way we can change our DOF is with the lens' aperture. By stopping down the aperture, you physically block the light rays coming from the edges of the lens, which are the rays that would produce the largest blur circles on the sensor. Closing down the aperture creates a darker overall image but also increases the depth of field.
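Here is a quick numerical sketch of all three factors, using the same thin-lens/hyperfocal approximation as above, a full-frame 0.029mm circle of confusion limit, and a hypothetical baseline of a 50mm lens at f/2.8 focused at 3m:

```python
# Varying one parameter at a time from a hypothetical baseline
# (50mm, f/2.8, focused at 3 m, full-frame CoC of 0.029 mm).

def dof_mm(focal_mm, f_number, focus_dist_mm, coc_mm=0.029):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm)
    near = hyperfocal * focus_dist_mm / (hyperfocal + (focus_dist_mm - focal_mm))
    far  = hyperfocal * focus_dist_mm / (hyperfocal - (focus_dist_mm - focal_mm))
    return far - near

print(round(dof_mm(50, 2.8, 3000)))   # baseline                     -> ~580 mm
print(round(dof_mm(50, 2.8, 6000)))   # 1. double the focus distance -> ~2400 mm (deeper)
print(round(dof_mm(100, 2.8, 3000)))  # 2. double the focal length   -> ~140 mm (shallower)
print(round(dof_mm(50, 8.0, 3000)))   # 3. stop down to f/8          -> ~1780 mm (deeper)
```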

If you'd like an illustrated example of how each of these changes affects depth of field, this video does a great job of explaining it.

Conclusion

The sensor size itself does not produce a shallower depth of field, but bigger sensors will force photographers to move closer to their subjects or to use longer lenses to match the field of view of a smaller-sensor camera. Moving closer and increasing your focal length both decrease depth of field.

If you enjoyed this, you may also enjoy my recent video/post debunking lens compression.


Comments


The question is what else you keep equal. For the parameters that photographers care about (subject distance, angle of view, f-number), using a bigger sensor does lead to shallower depth of field, as the dpreview link posted by Trevor demonstrates empirically. Sure, if you look at the physical size of the aperture in mm, that's a different story, but all photographers I know reason in terms of f-number, not physical aperture diameter.

Also see this Stanford applet to explore the geometry of depth of field in the case of thin-lens approximation. https://graphics.stanford.edu/courses/cs178-10/applets/dof.html

Krzysztof Kurzaj:

Well, yes and no.
To say that larger sensors produce shallower depth of field is pretty much on the same accuracy level as saying that higher ISO produces a brighter image. If shutter speed and aperture remain the same, then increasing ISO will definitely result in a brighter image. In the same fashion, if the focal length and the distance from the photographed subject remain the same, use of a bigger sensor will result in a shallower depth of field. But if we start stipulating that the other parameters can be adjusted, then neither of those statements may be true.

So indeed the larger sensor itself will not do the magic, and the author explains very well why, but this does not change the fact that, given the availability of lenses and other practical aspects, a portrait photographer will grab a full-frame DSLR over a Micro Four Thirds mirrorless.

Ann Barber:

I can mostly follow this; it's a little over my head... But I was asking myself while reading: why would this matter to me as an amateur? Then, watching the video illustrate the points so well, I realized in the side-by-side examples that there are some images that to me were way more preferable: brighter, sharper, more interesting. Now to watch again and figure out which settings or considerations (distance, lens) got those results.

I've never seen Fstoppers go so low.
This article should be titled "how to compare apples and oranges to get good clickbait." This is excessively disappointing from Fstoppers.
For the same perspective and FOV, at identical f-stop and print size, a larger sensor will produce a shallower DOF. That's what matters to photographers.

So if, with smaller sensors, you have to move farther away to keep the same field of view, which increases the distance to the focus plane, which increases the DOF, then isn't the sensor size what triggered everything after all? You had to move because of the sensor size... :)

If you're shooting with a medium format camera, it will have a longer focal length lens as its standard lens, e.g., the Pentax 67's 105mm f/2.4, whereas a 35mm digital or film camera has a standard lens of 50mm.
So if you were to shoot the same shot with each camera using its standard lens at the same distance and f-stop, the Pentax would have a shallower depth of field.
This is due to it having a longer focal length lens as its standard lens. So by using a larger format with its respective standard lens, you would naturally get a shallower depth of field.

So I am saying a larger sensor has a shallower depth of field, or am I missing something?

Alexis Cuarezma:

No, you're 100% right. Shoot 8x10 and the DOF is still super shallow at f/16.

revo nevo:

If you would like to have a wider angle with shallow DOF, you need to get a bigger sensor.
You have to change your focus distance because of the sensor size.
I don't really have to know why, since it doesn't really matter. The conclusion is still the same.

Next topic idea :D

-perspective distortion and sensor size

Rick McEvoy:

I just use a full frame DSLR; other sensor sizes don't affect me, as I really don't understand this stuff, but thank you for giving me an increased understanding of it.

Rick McEvoy

http://rickmcevoyphotography.com

Rick McEvoy:

Thank you for helping me understand this issue much better than I did before. Being a simple chap, I just use full-frame so don't need to worry!

Regards

Rick McEvoy - http://rickmcevoyphotography.com/

Just No!

Allow a physicist specializing in optics to clarify a few points.

First: your definition of DoF is used as a measure of the zone of pixel-level (100%) sharpness. So when you want to shoot a landscape and you want the flower in the foreground and the mountain in the background to be pixel-perfect sharp, this measure applies.
However, this is not the DoF perceived by a viewer. How come? Take a 100MP image and downscale it to 2MP: the perceived DoF does not change, because we perceive DoF relative to the image scale, not the pixel size. So you can just ignore different resolutions.

Second: while your explanation of how DoF changes with distance, f-stop, and focal length is right, your mistake is to assume that these don't change with sensor size. What does this mean?
When you shoot a portrait with an MFT camera and a Phase One, you will not shoot both with the same aperture and focal length at different distances; the reason for that you explained yourself in the lens compression video.
You are much more likely to shoot at the same distance. You could still shoot with the same lens and just crop the Phase One down to MFT, but that would defeat the purpose of shooting with the Phase One in the first place. So you are going to use different lenses, and you then have to apply crop factors to the focal lengths to keep your framing (e.g., a 42.5mm and a 132mm both as an 85mm equivalent). Now different rules apply to get the same DoF, namely multiply the f-stop by the crop factor and you are fine. But when the 132mm is f/2, you need a 42.5mm f/0.64 to get the same image! Good luck finding that lens.
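To make the arithmetic in this comment explicit, here is a small sketch of the crop-factor rule being described; the lens values are just the commenter's examples:

```python
# Crop-factor equivalence as described above: multiply both focal length and
# f-number by the crop factor to get the full-frame equivalent.

def full_frame_equivalent(focal_mm, f_number, crop_factor):
    return focal_mm * crop_factor, f_number * crop_factor

print(full_frame_equivalent(42.5, 0.64, 2.0))   # MFT 42.5mm f/0.64       -> ~85mm f/1.28 eq.
print(full_frame_equivalent(132, 2.0, 0.64))    # medium format 132mm f/2 -> ~85mm f/1.28 eq.
```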

So does sensor size change DoF? Well, not directly, but it very much forces you to!

Best regards

Usman Dawood:

So, you essentially agree with the article then lol.

It's not the sensor directly causing the change but other factors that are applied.

Just Yes!?

That's just like saying more passenger seats don't make a plane bigger; it's the designer who has to make room for the seats. Utterly useless semantics.

Usman Dawood:

That's correct, though: more passenger seats do not make the plane bigger; there is an upper limit. Also, how exactly does your analogy fit here? A very strange analogy to use.

Sensor size does not impact DOF in any way (CoC aside) if you're discussing the actual physics of how things work. What you're doing is mixing up two discussions: the physics of something and the creative decisions applied to it. They are different points; however, they work together in many situations.

Understanding what causes something properly is a much more effective way to learn. Saying larger sensors produce shallower DOF is not only incorrect but also a half-measure and a very lazy way of explaining things.

Well, if you want to dive in deep, then you would have to understand a whole lot about lens design to understand why it is so hard to make a decent lens with an f-stop below f/1.0. And you would have to do the whole calculation to show that, with a rectilinear lens, you get the same angle of view when you multiply focal length by the crop factor. But that is the easy part. Then you need to do the calculation to show that:
1st: In the case of fixed distance and fixed focal length, you get the same DoF (expressed in image size) with different sensor sizes if their f-stop × crop products are equal.
2nd: In the case of a fixed reproduction scale (e.g., a face fills the image) and varying focal length and sensor size, DoF (expressed in image size) only depends on f-stop × crop.

But this isn't easy even for people with a good mathematical understanding of optics.

So sometimes we have to live with half-knowledge.
But some half-knowledge is dangerous because it leads to wrong conclusions.
Telling people that sensor size doesn't affect DoF is in general just wrong.
Why? You can't get an MFT 42mm f/0.7; however, you can easily get an FF 85mm f/1.4.
So a different sensor size enforces different engineering constraints that result in a shallower DoF...

This is the same with the airplane seats analogy: the number of desired seats constrains the size of the airplane, but it doesn't change it per se.
E.g., a Lockheed C-5 Galaxy is a huge plane with few seats (a Phase One with an f/64 lens in the analogy).

So it is best to stick with a simple half-knowledge that leads to few wrong conclusions.
Therefore, multiply f-stop and focal length by the crop factor, and ISO by crop², and you won't go wrong.

The only case where you could get issues with this frame of mind is when you use a speed booster, but then you just replace the crop with (camera crop) × (speed booster factor) and it works again.
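A short sketch of that rule of thumb, assuming simple multiplication by the crop factor (and, for a speed booster, by the booster factor as well); the lens and ISO values are hypothetical examples:

```python
# Rule of thumb from the comment above: focal length and f-stop scale with
# the crop factor, ISO scales with crop squared, and a speed booster just
# multiplies the effective crop factor.

def full_frame_equivalent(focal_mm, f_number, iso, crop, booster=1.0):
    c = crop * booster
    return focal_mm * c, f_number * c, iso * c ** 2

# Hypothetical example: an MFT 25mm f/1.4 at ISO 200, with and without a 0.71x booster.
print(full_frame_equivalent(25, 1.4, 200, crop=2.0))                # -> (50.0, 2.8, 800.0)
print(full_frame_equivalent(25, 1.4, 200, crop=2.0, booster=0.71))  # -> (~35.5, ~2.0, ~403)
```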

If you stick with Lee Morris's explanation, you get a lot of confusion; just read the debate on what DoF actually is. While everyone intuitively knows that the DoF of an 85mm f/1.4 looks different from the DoF of a 15mm f/4 no matter the megapixels, we have Lee telling us that it changes with resolution. His definition of DoF isn't wrong; there are good applications for this definition in calculating the hyperfocal distance and in image stacking, as it is based on the circle of confusion. However, simply put, it measures the size of the bokeh balls expressed in pixel size rather than in image size.

This is the subtlety you have to deal with if you want the deep understanding.

Usman Dawood:

A lot of strawman arguments and red herrings in your points, however, I'll just discuss one of your points which is somewhat relevant.

"Telling people that sensor size don't affect DoF is in general just wrong.
Why? you can't get a MFT 42mmf0.7"

Okay, well a 135mm f/2 on full frame produces a shallower DOF compared to a 150mm f/3.2 from Hasselblad. Even if you factor in the larger MF sensor, Nikon and Sigma have a 105mm f/1.4. How does your point make any sense?

Just because a particular lens doesn't exist doesn't make your point valid; there are lots of examples of equivalent lenses available on smaller sensors that produce a shallower DOF compared to larger sensors.

I'm sorry but that point you made is complete nonsense.

Well, if instead of calling stuff nonsense you would actually read, you would know that I gave you the tools:

f/2 < f/2.05 equivalent = f/3.2 × 0.64 (Hasselblad crop),
but 150mm × 0.64 = 96mm equivalent < 135mm.
So if you make the subject fill the frame on both the FF and the MF camera, you should get the same DoF; however, you have to be closer with the Hasselblad.

Okay, so what about a 105mm f/1.4? That would be a 164mm f/2.2 on a Hasselblad; however, they only have a 100mm f/2.2 (64mm f/1.4 equivalent),
so you need to be closer. But that is just because they don't build one. There have been faster MF lenses in the past, like the Mamiya Sekor C 80mm f/1.9 (≈50mm f/1.2 equivalent),
and if you look at the Fuji GFX, you can already get the Zhong Yi 85mm f/1.2 (≈65mm f/0.9 equivalent), and you might see an 80mm f/1.4 in the future.

However, as available refractive indexes impose a theoretical limit at about f/0.5 (you can get lower with, e.g., pure diamond), the way to a shallower DoF is a larger sensor!

Just read this:
https://en.wikipedia.org/wiki/Depth_of_field#Relationship_of_DOF_to_form...

Usman Dawood:

135mm f/2 on full frame is 135mm f/2.

150mm f/3.2 on the smaller Hasselblad sensor is around a 135mm f/2.8 FF equivalent (0.2 crop factor).

150mm on the larger Hasselblad sensor is around a 96mm f/2 FF equivalent (0.64 crop factor).

The 105mm is f/1.4.

I think you might have the numbers wrong on this one.

If Fuji make an 80mm f1.4 that would be pretty amazing though.

Again, your point doesn't make much sense; just because a particular lens doesn't exist, there are plenty of other examples.

You were right, it should have been 150mm,
but that's still equivalent to a 100mm, not a 135mm.

Once again, it does. If you have a 5-seat PT Cruiser and a van, the van can transport more people; you just might have to buy extra seats.
So while they both have the same lower limit (1 person), they have different upper limits (5 vs. 9 or more).

The same is true for sensors:
the upper limit is at ~f/0.7 and the lower is somewhere at f/32/crop.
So you can always get a shallower DoF with FF than with MFT. With MF you could too, but you might have a hard time finding the lens for it.
This is because most MF users are more obsessed with sharpness than with DoF.
E.g., compare the Sigma 85mm f/1.4 and the Fuji 110mm f/2.

However, no one who shoots with a Hasselblad reads a blog post about photography basics.

But plenty of people deciding between MFT, APS-C, and FF do.

And a clickbait article like this does little to help the confusion about DoF and sensor size.

Usman Dawood:

There are two current MF sensors, one with a 0.2 crop factor and the other with a 0.64 crop factor.

For both of those crop factors there is a better alternative on FF:

the 135mm f/2 and the 105mm f/1.4.

Which camera has a 0.2 crop?
The Fuji GFX 50S, the X1D-50c, and the H6D-50c have ~0.76,
and the H6D-100c and the Phase One XF are ~0.64 (645).
0.2 would be about 18x12cm.
However, if there were one, use this: (https://www.bhphotovideo.com/c/product/43912-USA/Rodenstock_160704_210mm...)
It's a 42mm f/1.1 equivalent. ;)

Usman Dawood:

Haha, no, I meant 0.8 lol, that's a weird typo, not sure how that happened.

"However, this is not the DoF perceived by a viewer. How come? Take a 100MP image and downscale it to 2MP: the perceived DoF does not change, because we perceive DoF relative to the image scale, not the pixel size. So you can just ignore different resolutions."

Yes it does. If you lower the resolution enough, or move far enough away from the print, everything will appear in focus.

No. If you have an image of just a bokeh ball, of course you can downscale it until it's just one pixel, but that's not the point.
Here, look at this image of guitar strings: https://wallhere.com/en/wallpaper/661572
What you call DoF is the short length in the middle where the strings appear sharp, and yes, the length of that changes with resolution. But what you perceive as DoF is the angle of the wedge formed by the out-of-focus string. This angle changes with f-stop, distance, and focal length, or, if you are willing to parametrize it differently, it depends on angle of view, lens diameter, and sensor size. But of course, you can always downscale to a single pixel. ;)

Tony Northrup:

You can simplify this even further: Depth-of-field is determined by magnification and the iris. Magnification is determined by distance to subject and angle of view. Angle of view is determined by both the focal length and the sensor size.

If you were to keep the angle of view, distance to subject, and f/stop constant, but vary only the sensor size, you'd see similar photos with different depths-of-field. If you were to keep the focal length, distance to subject, and f/stop constant, but vary only the sensor size, you'd see dissimilar photos with the same depths-of-field.
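For what it's worth, here is a sketch of the first of those two scenarios, assuming the circle of confusion limit scales with the sensor diagonal (full frame ≈ 0.029mm, Micro Four Thirds ≈ 0.0145mm) and using a hypothetical 50mm/25mm pair to keep the angle of view constant:

```python
# Same angle of view, same subject distance, same f-stop; only the sensor
# size (and therefore the focal length and CoC limit) changes.

def dof_mm(focal_mm, f_number, focus_dist_mm, coc_mm):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm)
    near = hyperfocal * focus_dist_mm / (hyperfocal + (focus_dist_mm - focal_mm))
    far  = hyperfocal * focus_dist_mm / (hyperfocal - (focus_dist_mm - focal_mm))
    return far - near

# Subject 3 m away, f/2.8, same framing on both cameras:
print(round(dof_mm(50, 2.8, 3000, coc_mm=0.029)))   # full frame, 50mm        -> ~580 mm
print(round(dof_mm(25, 2.8, 3000, coc_mm=0.0145)))  # Micro Four Thirds, 25mm -> ~1200 mm
```

Under those assumptions the larger sensor ends up with roughly half the depth of field, matching the first scenario; the second scenario (fixed focal length) depends on how much each image is enlarged for viewing, which is the point the article is making.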

Alexis Cuarezma:

Yes, they are pretty much just choosing to "keep the focal length, distance to subject, and f/stop constant, but vary only the sensor size, [so] you'd see dissimilar photos with the same depths-of-field" and stating that sensor size doesn't matter.

Tony's got it, and I'm afraid this article is wrong. You can only compare depth-of-field if you keep some of the variables the same. So typically you keep the display image size the same (e.g. 8"x10" print), viewing distance the same, visual acuity the same. And you assume the display (print or monitor) is fine enough that the pixel size or print dot size is not interfering with one's ability to spot sharpness. Then when you change the other variables, you find that indeed sensor size matters because smaller sensors need greater magnification to produce the same size print.

Sensor resolution is completely irrelevant unless the sensor is so chunky that the pixel size becomes the limiting factor for being able to determine what is sharp. And that really hasn't been an issue for about a decade at any comfortable viewing distance.

However, if one's definition of depth-of-field is the degree of sharpness when one views an image at 100% resolution on a given monitor display, then different factors come into play. The magnification is now determined by the size of the sensor pixels rather than just the size of the sensor. And for a given number of megapixels, a crop sensor will have smaller sensor pixels than a full frame. But really, viewing images at 100% is for pixel-peepers, so this definition is not very useful.

What exactly is wrong? I agree with everything you've written, but I don't know what in the article says anything different.
