The Smaller the Sensor Size, the Shallower Your Depth of Field

When talking about the differences between full-frame cameras and crop sensors, one of the biggest arguments in favor of full-frame sensors is the ability to produce images with a shallower depth of field. This was always my understanding of the subject as well. But after watching this video, I have seen the error of my ways. As it turns out, if all the variables are the same and the only thing changing is sensor size, the smaller the sensor, the shallower your depth of field.

I'm not going to try to explain all the science and math from the video, because the video does a much better job than I could even attempt. But my biggest takeaway was the discussion of a sensor's crop factor and how it's used to calculate a lens' equivalent focal length. Most people multiply the focal length of a lens by the crop factor of a sensor in order to get the full-frame equivalent. The trick, though, is that you need to multiply the crop factor by the aperture as well as the focal length.
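As a quick sketch of that arithmetic (the lens values here are illustrative, not from the article), multiplying both numbers by the crop factor looks like this:

```python
# Hypothetical helper: compute the full-frame equivalent of a lens on a
# crop-sensor body. The article's point: multiply BOTH the focal length
# and the f-number by the crop factor, not just the focal length.

def full_frame_equivalent(focal_mm, f_number, crop_factor):
    """Return the (focal length, f-number) a full-frame lens would need
    to match the framing and depth of field of this crop-sensor lens."""
    return focal_mm * crop_factor, f_number * crop_factor

# A 35mm f/1.8 on a 1.5x APS-C body renders roughly like a 52.5mm f/2.7
# on full frame.
eq_focal, eq_f = full_frame_equivalent(35, 1.8, 1.5)
print(f"{eq_focal:.1f}mm f/{eq_f:.1f}")  # 52.5mm f/2.7
```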

The reason it seems that full-frame cameras have a shallower depth of field has a lot to do with the focus distance needed in comparison to a crop sensor. The example below shows that in order to get the same field of view on a crop sensor, you need to increase the distance to the subject. This added distance is what increases the depth of field on the crop sensor.
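A rough numeric check of that effect, using the standard hyperfocal-distance formulas. The 50mm f/2 lens, the shooting distances, and the circle-of-confusion values are assumptions chosen for illustration, not figures from the article:

```python
# Approximate total depth of field from the standard thin-lens formulas.
# All lengths are in millimeters.

def total_dof(focal, f_number, distance, coc):
    hyperfocal = focal**2 / (f_number * coc) + focal
    near = distance * (hyperfocal - focal) / (hyperfocal + distance - 2 * focal)
    far = distance * (hyperfocal - focal) / (hyperfocal - distance)
    return far - near

# Same 50mm f/2 lens on both bodies. To match the full-frame framing,
# the 1.5x crop body shoots from 1.5x farther away; its circle of
# confusion is also 1.5x smaller (more enlargement for the same print).
ff_dof = total_dof(50, 2, 2000, 0.030)          # full frame at 2 m: ~188 mm
crop_dof = total_dof(50, 2, 3000, 0.030 / 1.5)  # crop body at 3 m:  ~284 mm
print(round(ff_dof), round(crop_dof))  # the crop shot has MORE depth of field
```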

Who here just had their mind blown?



John Hess:

Right, it's not an equivalent, which is why we need the crop factor to determine the lens equivalent.

Wouter Oud:

So practically speaking, in everyday use, a bigger sensor will have a shallower depth of field in comparison to a smaller sensor.

Arturo Mieussens:

Just a longer way of saying the same old thing. Depth of field depends on the aperture and the magnification (the relation between the object's size and its size on the sensor), and the magnification depends on focal length, distance to the subject, and sensor size. That's why, when you move the cameras so you have the same field of view, you get exactly the same image with the same aperture (and ISO).

Tony Carter:

LOL...next week on FStoppers: "Sales of APS-C cameras skyrocket!" ;)

How does Fstoppers work, by the way? Can the authors just upload an article, or is it checked by other authors or a senior editor?

Like someone else said, this is wrong info and may lead newcomers to make the wrong decisions. This article should be removed.

When I read the title of the article, I thought, "That's backwards!" Then I read the article and realized the author is confused and writing about things he doesn't understand. After reading the comments and his replies, I am even more convinced of that.
Who edits these articles for Fstoppers? This is a great example of misinformation on the web.

John Curlett:

I think it would have helped if the author had made more of a point that he is departing from the classical, textbook definition of depth of field, which doesn't consider sensor resolution. It is true that the perceived sharpness of the final image is affected by both the resolution of the sensor and the resolving power of the lens, which he did not mention. These factors have an overall effect on the sharpness of the image, which, combined with the DOF, dictates the range of distance where the image is acceptably sharp. This final result could be called "effective" or "practical" depth of field so as not to confuse the reader with the classical definition of DOF. There is already more than enough confusion when it comes to DOF.

John Hess:

This IS the textbook definition of depth of field. Circle of confusion is not tied to pixel size, but it works as an analogy. Given an infinitely sharp lens and an infinitely sharp sensor, the smaller sensor will have a shallower depth of field with the same lens.
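That claim can be illustrated with the common approximation DoF ≈ 2·N·c·s²/f² (valid when the subject is well below the hyperfocal distance); the circle-of-confusion values below are common conventions chosen for illustration, not figures from the comment:

```python
# Same lens, same aperture, same distance -- only the circle of
# confusion (c) changes, because the smaller frame is enlarged more to
# make the same size print.

def approx_dof(f_number, coc_mm, distance_mm, focal_mm):
    """DoF ~= 2*N*c*s^2 / f^2, a common far-from-hyperfocal approximation."""
    return 2 * f_number * coc_mm * distance_mm**2 / focal_mm**2

print(round(approx_dof(2, 0.030, 2000, 50), 1))  # full frame: 192.0 mm
print(round(approx_dof(2, 0.020, 2000, 50), 1))  # 1.5x crop:  128.0 mm, shallower
```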

John Curlett:

My original post was trying to help you, but you apparently didn't see it that way. Making a statement like the one in your response just confuses people. Remember that the textbook definition of DOF as it applies to photography refers to what a person with normal eyesight will deem to be in acceptable focus when viewing an 8x10 print from about one foot away.

For example, one takes pictures of the same subject at the same distance and f-stop with full-frame and crop-sensor cameras. If the resulting files are used to create identical pictures (meaning the same framing of the subject), then there will be no difference in the DOF between the two prints.

If, on the other hand, one wants to use the entire sensor image and have the same framing in both pictures, then they must increase the distance to the subject when taking the picture with the crop-sensor camera. In this case, when prints are made from the full sensor images, the one shot with the crop-sensor camera will have a GREATER DOF. This is because the DOF increases with the square of the distance to the subject, while the loss in DOF from the increased enlargement of the crop-sensor image is linear with subject distance, creating an overall increase in DOF.

This is the reason folks say that crop sensors provide greater DOF: they are referring to the final result using the full sensor image. This is the desired situation for optimum image quality and part of "getting it right in the camera."

John Hess:

Thanks for the clarification of your intent ;) I don't think there's anything we factually disagree on. If you watch the rest of the video, all of it was addressed.

But the confusion, as I see it, isn't in my statement of the fact, but in the improper application of a rule of thumb. Thinking the sensor causes shallower depth of field leads to long comment discussions like this one, where folks argue over things they don't understand. If you're going to try to understand lens equivalents, why half-ass the explanation?

John Hess:

Also, to nitpick: changing distance will make the subject size identical, but it will change the perspective, so it's not really a lens equivalent. I was taken to task for not mentioning that on YouTube.

Jacopo Pregnolato:

I think this video explains it: https://www.youtube.com/watch?v=f5zN6NVx-hY

Edit: it's the video the author mentioned in the comments.

Tim Foster:

"Equivalent focal length" means nothing. Crop factor is a concept invented to help photographers transition from the 135 frame to the newly developed, smaller digital sensors that were available when DSLRs first hit the market. I'm not sure what the point of this article is.

Daniel Lee:

Please at least adjust the title to mention focal length equivalents, e.g. https://www.slrlounge.com/depth-of-field-and-lens-equivalents/

Oops, the video in the article got it wrong.

The instructor has, like many before him, fallen into the trap of changing two or more variables and attributing the result to changing one variable. He changed sensor size PLUS pixel density PLUS print magnification and attributed the result to sensor size alone.

So let's try a really simple experiment. Make a 10x8 print from a full-frame sensor, then cut the print down to 8x6. This is about the same as going from a full-frame sensor to a crop sensor. So all we have changed, in effect, is the crop factor of the sensor. Pixel density remains the same, AND so does the print magnification. Has the DOF changed? No, it remains exactly the same!

So if we now blow up our cropped 8x6 print to 10x8 and compare it to the original uncropped 10x8, has the DOF changed? Yes, as we have now increased our magnification and so also increased the circle of confusion.

Now to compare pixel density, we can say thank you to Sony for the wonderful A7 range. They have three full-frame cameras with different pixel densities: 12MP, 24MP, and 42MP. Will there be a difference in apparent DOF between them? Yes, there is.

The video did say it was comparing the same size prints (10x8) for each system and that most crop-sensor cameras have higher pixel density. So why not also include the last variable and change the focal length of the lens to compensate for the change in angle of view between the different sensor sizes? A 50mm lens on a full-frame camera covers about 47°, and on a crop-sensor camera we need about 35mm to give us the same 47°. If we shoot at f/8, the taking aperture is 1/8 of the focal length, so our full-frame camera has a taking aperture of 6.25mm (for the 50mm), and it's about 4.4mm (for the 35mm) on the crop-sensor camera. And as we know, a smaller aperture gives us a larger DOF.

So the result of all of this is that changing the sensor size on its own does NOT change the apparent DOF; it is only when we change another variable to compensate for the smaller sensor that we get a change in apparent DOF.

Thomas
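The entrance-pupil arithmetic in the comment above, spelled out as a trivial sketch (the f-number is, by definition, the focal length divided by the pupil diameter):

```python
def pupil_diameter_mm(focal_mm, f_number):
    # The physical "taking aperture" is focal length / f-number.
    return focal_mm / f_number

print(pupil_diameter_mm(50, 8))  # 6.25 mm for the 50mm on full frame
print(pupil_diameter_mm(35, 8))  # 4.375 mm for the 35mm on the 1.5x crop
```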

John Hess:

Pixel density has only a tenuous relationship. Don't make the mistake of tying CoC to pixel density. A full-frame 12MP camera has the same DoF as a 50MP camera. The difference is only in the enlargement.

And since when would it make sense not to compare the same size print? Do you print MFT pictures at half the size to compare with those of a full-frame camera? Does an MFT camera shoot half-resolution HD video compared to its full-frame brother? Of course not. The only reason is to support your argument that they shoot the same DoF, but you're introducing an unrealistic variable.

Hi John, thanks for getting back to me. As for the same size print: I think you should only change one variable at a time and then look at the result. Otherwise, the correct conclusion would be "the smaller the sensor size, PLUS a greater image enlargement to give you the same size print, the shallower your depth of field."

As for introducing an unrealistic variable: no, I am reducing the variables to just one at a time. Your approach is:

Change A plus change B gives you C. Therefore, change A gives you C.

My approach is:

Change A. Does that give you C? Answer: No
Change B. Does that give you C? Answer: Yes

Therefore, change B gives you C.

Is my approach correct?

John Hess:

No, your approach is not correct, because you're assuming that final print size isn't a variable.

If you take a print and cut out a smaller portion of it, you haven't changed the enlargement, but you have changed the proportions. And circle of confusion (and DoF) is defined for a certain print size viewed from a certain distance with a certain enlargement from the original (the sensor size).

So by definition when you cut the image - you have changed a variable.

I think we both agree on the fundamental details of this subject. The fact is, image size and enlargement are tied together: make a change in one and the other must change as well; you can't just isolate one variable. Starting with equal enlargement holds no more mathematical purity than starting with equal print sizes. But starting with equal print sizes is much more commonplace. In video, large sensors shoot the same size image as small-sensor cameras. When people walk into Costco to print their photos, they select the print size, not the magnification from their sensor. Lastly, holding print sizes equal lets us discuss lens equivalents, which, if we just held magnification constant, would be pointless, because all lenses would be equivalent on all sensors.
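One common convention that makes "CoC is defined by print size, viewing distance, and enlargement" concrete is the so-called Zeiss formula: sensor diagonal divided by 1500. That divisor and the sensor dimensions below are assumptions of this sketch, not figures stated in the comment:

```python
import math

def coc_mm(sensor_w_mm, sensor_h_mm, divisor=1500):
    # Circle of confusion scaled to the format: a smaller frame is
    # enlarged more for the same print, so its CoC budget is smaller.
    return math.hypot(sensor_w_mm, sensor_h_mm) / divisor

print(round(coc_mm(36, 24), 3))      # full frame: ~0.029 mm
print(round(coc_mm(23.6, 15.7), 3))  # APS-C:      ~0.019 mm
```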

Alexander Roan:

If I am understanding this correctly, I think I knew it, because I always think about the distance from camera to subject and from subject to the bokeh-able background. So while I hadn't really considered it as a factor when choosing full frame vs. crop, it makes sense.

Spencer Bentley:

Great video. It's always nice to see the science behind the art. I have a question, though: is image compression a product of the lens, the sensor size, or a combo of both? I have a Sony a7M2 and a 50mm Zeiss Loxia and realized I had an APS-C option that could effectively make the lens an 80mm equivalent. I would love to experiment with this in portraiture because of the compression implications. I just want to know if I can expect it to act like an 80mm lens, or if it will only show an 80mm field of view?

Thanks to anyone that can answer this for me.

John Hess:

Yes, it will behave for all intents and purposes like an 80mm. But remember, it's not the lens that compresses the image, it's the distance. An 80mm will force you to move back, putting more space between you and the subject and therefore more compression.

Almost always when someone presents an article or video, it creates an enormous diversity of comments and opinions, which just goes to show what a difficult issue it really is. In the end, what matters is results.
For various trend, technical, and stylistic reasons, photographers have become a little preoccupied with shallow DOF over the past few years; perhaps we need to re-examine our motivations and the reality.
Shallow DOF is largely about creating separation between subject and background, but there are many factors that play into that separation, as the article, video, and comments have explored.
But separation also has a lot to do with presentation size and the type of display (print vs. screen).
The thing is, if you are only looking at small web images, you need shallow DOF to get a good degree of separation, hence full-frame DSLRs with wide apertures might be optimal. On the other hand, a look that succeeds in a small web format often proves hopelessly soft for a medium- to large-scale print, where the viewer is probably expecting a more immersive experience.
Oddly, perhaps, it may be easier to get a look appropriate to the "artistic intention" with a smaller sensor for larger images (disregarding noise and IQ issues).
Your display can have an effect: anyone who owns an iMac with a 5K display will no doubt have noticed that images they once considered truly sharp can often look comparatively poorly resolved on their new display.
Print resolution can have an influence, as can sensor and lens resolution and even post-capture sharpening methods.
Basically, I see it this way: DOF effects are to a great degree about the difference between resolved and less resolved; the greater the peak resolution of the whole system, the greater the potential for a visual difference and separation.
Concentrating purely on format or lens aperture addresses just part of the system and can, and often does, lead us down an expensive rabbit hole.

Markus Hofstätter:

I love shallow depth of field; that's the reason I went from APS-C to full frame, from full frame to medium format (film), from that to 4x5 large format (shooting wide open at f/4.5), from that to 18x23cm large format, and now I have ended up with 30x40cm large format.

For example, 150mm on 4x5 is like a 50mm on full frame, and my 380mm on 30x40cm is quite a wide angle.

I imagine depth of field on different sensor/film/plate size like that:

The bigger your sensor/film/plate, the more information it captures around your subject (if you do not move).

For example, if I shoot a headshot at f/4.5 on a 30x40cm plate, you would see the whole face, but the nose and forehead would already be out of focus.
If I shoot the same head at the same distance on 4x5 plate or film, I would just see the lips, for example, and they would look quite focused; at APS-C size you would see just part of the lips, and it would be tack sharp.

I attached a picture for better understanding
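The large-format equivalence in the comment above is the same crop-factor idea run in reverse: a format factor as a ratio of diagonals. The film dimensions here are approximate assumptions for illustration:

```python
import math

FF_DIAG = math.hypot(36, 24)  # full-frame diagonal, ~43.3 mm

def format_factor(width_mm, height_mm):
    # Ratio of diagonals; values below 1 mean the format is larger than
    # full frame, so its lenses act "longer" than their focal length.
    return FF_DIAG / math.hypot(width_mm, height_mm)

# 4x5 inch film is roughly 102 x 127 mm: factor ~0.27, so a 150mm lens
# frames roughly like a 40mm on full frame -- the "normal lens"
# ballpark the comment describes.
print(round(format_factor(102, 127), 2))  # 0.27
```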

Whoever wrote that title does not understand how this works. One lens cannot and never will have different depth of field on different sensors. The only thing that differs is the crop. If I take a Nikon D7000 and a D800 and shoot with a Nikkor 50mm f/1.8, the DoF will be exactly the same at the same distance from the subject and the same focus. The difference is that on the full-frame sensor I will "catch" more of the environment, since part of the image projected onto the sensor misses the sensor on the D7000 (it's cropped).

I wish this notion that DoF changes, or that 50mm on full frame = 85mm on APS-C, would go away. If I shot a photo with a 50mm on full frame, cropped away a portion of it, and then claimed it was shot at 85mm, people who know their stuff would think me insane.

Anonymous:

I want my time back. What a stupid article.

Who the hell would ever shoot with a 36-50MP full-frame camera and, in the middle of composing the photo, have this thought cross their mind:
"HMMM. IF I CROP THIS PICTURE TO APS-C (1.5X ZOOM) IN POST, I CAN GET SLIMMER DOF!!!!"
No! No fruitcake is ever going to do this! They're going to step in closer to the subject and get the right composition, duh. This article is all theory and calculations and ZERO practicality.

Then again, I don't get why my iPhone 6 cannot have the same DoF or look as my Cinelux Ultra 110/2 on a Mamiya 645AFD/ZD, when both apertures are around f/2.0.

Terry Henson:

Sounds interesting...but in this day and age, almost 20 minutes is too long to say anything.

Justin Myers:

Can't wait for next week's article: "Shorter Lenses Create More DoF."
