When talking about the differences between full-frame cameras and crop sensors, one of the biggest arguments in favor of full-frame sensors is the ability to produce images with a shallower depth of field. This was always my understanding of the subject as well. But after watching this video, I have seen the error of my ways. As it turns out, if all the variables are the same and the only thing changing is sensor size, the smaller the sensor, the shallower your depth of field.
I'm not going to try to explain all the science and math from the video, because the video does a much better job than I could even attempt. But my biggest takeaway was thinking about a sensor's crop factor and how it's used to calculate a lens's equivalent focal length. Most people multiply the crop factor of a sensor by the focal length of a lens in order to get the full-frame equivalent. The trick, though, is that you need to multiply the crop factor by the aperture as well as the focal length.
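That arithmetic is simple enough to sketch in a few lines of Python (the function name and the 1.5x APS-C example are mine, not from the video):

```python
def full_frame_equivalent(focal_mm, f_number, crop_factor):
    """Return the full-frame-equivalent focal length and f-number.

    To match both field of view and depth of field, the crop factor
    multiplies the focal length AND the f-number.
    """
    return focal_mm * crop_factor, f_number * crop_factor

# Example: a 50mm f/2.8 lens on a 1.5x APS-C body
eq_focal, eq_aperture = full_frame_equivalent(50, 2.8, 1.5)
print(f"{eq_focal:.0f}mm f/{eq_aperture:.1f}")  # 75mm f/4.2
```

So in full-frame terms, a 50mm f/2.8 on APS-C frames and renders depth of field roughly like a 75mm f/4.2, not a 75mm f/2.8.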
The reason it seems that full-frame cameras have a shallower depth of field has a lot to do with the focus distance needed in comparison to a crop sensor. The example below shows that in order to get the same field of view on a crop sensor, you need to increase the distance to the subject. This added distance is what increases the depth of field on the crop sensor.
Who here just had their mind blown?
IMHO this is the wrong way of thinking. While this is scientifically true, it is very misleading to the junior photographer.
Instead of holding the focal length constant, we should be holding the field of view constant, which is what you need to do if you want to create two images that have the same perspective and cropping. In this case, if you hold aperture constant, the larger sensor will give the shallower depth of field.
Why? Because in order to create the same cropping AND field of view/perspective, you have to place each camera the same distance from the subject and give the larger sensor a longer focal length lens, which decreases the depth of field when compared to a smaller sensor.
I thought it was quite funny that he was out of focus for the entire video. Or do I need new glasses!
Yeah, I found that kinda annoying, especially considering the whole video was about focus. Other than that, it was a very informative video, if a bit misleading.
Smaller pixels have a shallower DOF. Saying that smaller sensors do is a bit misleading, as they would have to have the same pixel count as their FF counterparts.
Hi everyone - I'm the guy that wrote and appears in the video. This one was in research for a long time, and I knew it would be a controversial topic (I remember Tony Northrup's video), so I purposely tried to create physical experiments that would demonstrate the phenomenon. Before I wrote it, I was one of those guys that says "of course sensor size doesn't matter" when yes, it actually does, in the opposite way we tend to say.
So to address a few questions that have popped up.
First, you have to forgive me for tying pixel size to circle of confusion - I didn't say it was exactly the pixel size, but that it is LIMITED by the pixel size. You can't have a CoC that is smaller than the pixel, but the pixel can be a lot smaller than the CoC. The analogy comes from when I first started out shooting standard-definition DV video: I never had problems with focusing, but after switching to HD I noticed much more easily where the focus was off. If we pin the CoC to pixel size, you can easily visualize how the smaller sensor has a shallower depth of field, purely as a thought experiment (and this analogy works really well for video, since we have standardized resolutions across sensor sizes).
But in reality - the Circle of Confusion is NOT tied to the pixel size.
The formula: CoC (mm) = (viewing distance (cm) / desired final-image resolution (lp/mm) at a 25 cm viewing distance) / enlargement / 25
Now generally we don't know what our final image size (enlargement) will be, so a lot of people use something close to the Zeiss formula, which is d/1730 (sometimes d/1500), where d is the diagonal measure of the original image - in other words, the sensor.
So a full-frame camera would have a CoC of 0.029mm. If we do some unscientific math, we find a 12MP FF camera has a pixel width of about 0.008mm and a 50MP FF camera a 0.004mm pixel width. Both pixel widths are SMALLER than the CoC - so the two cameras have identical DoF.
But what if we keep cropping in - enlarging the photo? As we make the enlargement bigger, our circle of confusion gets smaller (from the equation above). Make it big enough that the CoC is about 0.006mm, and the 50MP camera will show blur in details that might have looked tack sharp on a 12MP camera, because the 12MP camera can't resolve that finely.
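Those back-of-the-envelope numbers can be reproduced directly (a sketch of my own; I use the d/1500 variant of the Zeiss formula, which is what yields the 0.029mm figure, and assume a 3:2 full-frame sensor):

```python
import math

SENSOR_W, SENSOR_H = 36.0, 24.0  # full-frame dimensions in mm

# Zeiss-style circle of confusion: sensor diagonal over 1500
# (d/1730 is the other commonly quoted divisor)
diagonal = math.hypot(SENSOR_W, SENSOR_H)
coc = diagonal / 1500  # ~0.029 mm

def pixel_pitch(megapixels):
    """Approximate pixel width in mm for a 3:2 full-frame sensor."""
    horizontal_pixels = math.sqrt(megapixels * 1e6 * 3 / 2)
    return SENSOR_W / horizontal_pixels

print(f"CoC:        {coc:.3f} mm")              # ~0.029 mm
print(f"12MP pitch: {pixel_pitch(12):.4f} mm")  # ~0.0085 mm
print(f"50MP pitch: {pixel_pitch(50):.4f} mm")  # ~0.0042 mm
```

Both pitches come out smaller than the 0.029mm CoC, which is the point of the comparison: at a standard print size, the 12MP and 50MP bodies share the same depth of field.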
Regarding the notion that a crop sensor is JUST a cropped version of a full frame and that DoF doesn't change just because you crop... you have to compare apples to apples. The crop image has to be ENLARGED to match the FF image. Going back to our CoC equation: increase the enlargement and the CoC gets smaller - shallower depth of field.
If you need proof, watch our video - there's evidence of it right there. Someone mentioned that my video was out of focus - I'm a bit soft when I'm front and center, blown up big... but compare that to the shot where I'm in the corner. When my image is small, it looks sharp. It's the EXACT same video - I didn't change the focus - but when it's small it's sharp, and when it's big it's soft.
So consider the wide shot (where I'm small) to be the full frame and the close up to be the Crop Sensor. What looks sharp in the full frame, looks soft in the crop- right there is why smaller sensors have a shallower depth of field. ;)
Regarding this being the wrong way of thinking and that we should compare fields of view... in our video we do - it's the elephant in the room. But the phenomenon occurs and we have to try to understand it. We can't just brush it off with "well, scientifically it's right, but it's the wrong way to think about it"... because invariably some optics guy is going to walk into the forum and start pounding at the keyboard about how everyone else is wrong. The problem is he may not explain it well, and then we have even more confusion.
So our goal was to put together a piece by piece explanation from the ground up. To do that we have to keep variables constant.
And now if my appeal to logic wasn't enough... here's a link to Zeiss's white paper on depth of field which I consulted throughout the research:
Page 9, from the section "Smaller film format with the same lens": "Reducing the size of the film format therefore reduces the depth of field by the crop factor."
http://www.zeiss.com/content/dam/Photography/new/pdf/en/cln_archiv/cln35...
Hi John, thanks for joining the conversation!
First of all, thanks for the many great videos on sensors and lenses so far. I really enjoyed watching those in the past.
In this one there's nothing wrong either.
I think what's a bit confusing to many is WHY the smaller sensor has shallower DoF. It is because a smaller sensor with the SAME total pixel count has to have SMALLER pixels. Therefore the same CoC would cover more pixels on a smaller sensor which results in blur.
If you had a smaller sensor with the same pixel pitch of the larger one, it really would just be a crop with exactly the same DoF. You'll just miss a lot of information outside the frame.
In order to get similar framing, however, you will have to back up with the camera. And this increased subject distance more than compensates for the effect of the smaller pixels. That's why, in general, larger sensors are considered to have a shallower DoF: simply because you can stand closer to the subject.
But overall I think the rather confusing subject of equivalency is well explained.
Smaller pixels are one way to think about it, but it's important to remember that CoC is not tied to pixel size on the sensor. These CoC ideas were around in the film days, and film is agnostic to pixel size. The pixel thing was something I didn't anticipate in the discussions.
I came up with an analogy on another comment board. Look at the period at the end of this sentence. It looks like a single dot... That's a circle of confusion; whether it's made by a single pixel or a hundred smaller pixels doesn't matter, it's small enough to be considered sharp. If we zoom in, we are changing what we consider sharp; zoom in enough and we'll see it's not a dot but a bundle of pixels; zoom in further and we'll see it's a bunch of carbon atoms!
So basically it's not about how sharp it really is, it's about what we consider acceptably sharp and magnification plays a big role in that.
Agree 100% - as I mentioned in another comment before (quoting myself here, duh!):
"The rest is just theory on when to consider a circle small enough to be called sharp."
What counts is how we view the picture afterwards. (e.g. print size and viewing distance)
Hi John,
I've found your videos on lenses informative; I saw the latest one over on SLR Lounge. Even though I now own a DSLR, I'm still stuck in the film world because that's what I'm most used to after 35 years. Different film formats have different focal lengths for what counts as wide angle, normal, and telephoto. I've been doing research on medium format photography: a normal lens for 6x4.5 is 80mm, and for 6x7 it is 110mm.
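A handy rule of thumb behind those "normal" focal lengths is that a normal lens roughly matches the format's diagonal; a quick check (the nominal frame dimensions below are my assumptions, and actual frames vary slightly by camera):

```python
import math

# Nominal image areas in mm; the conventional "normal" lens for each
# format lands at or a little above the diagonal.
formats = {
    "35mm film": (36, 24),   # normal ~50mm
    "6x4.5":     (56, 41.5), # normal ~80mm
    "6x7":       (56, 67),   # normal ~110mm
}

for name, (w, h) in formats.items():
    print(f"{name}: diagonal {math.hypot(w, h):.0f} mm")
```

The diagonals come out around 43mm, 70mm, and 87mm, which is why 50mm, 80mm, and 110mm feel "normal" on those three formats.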
Read this section => https://en.wikipedia.org/wiki/Depth_of_field#Relationship_of_DOF_to_form... <= and see under what circumstances a smaller sensor has a shallower depth of field.
So far as I understand, it's only when a picture is taken from the same distance, using the same f-number and same focal length, and the final images are the same size. Which means it's not an equivalent comparison, since the FoV is not the same.
Right, it's not equivalent, which is why we need the crop factor to determine the lens equivalent.
So practically speaking, in every day use, a bigger sensor will have a shallower depth of field in comparison to a smaller sensor.
Just a longer way of saying the same old thing. Depth of field depends on the aperture and the magnification (the relation between the object's size and its size on the sensor), and the magnification depends on focal length, distance to subject, and sensor size. That's why when you move the cameras so you have the same field of view, you have exactly the same image with the same aperture (and ISO).
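The aperture-and-magnification dependence this comment describes is captured by a standard thin-lens approximation; a minimal sketch, with example numbers of my own (valid away from the macro and hyperfocal extremes):

```python
def depth_of_field(f_number, coc_mm, magnification):
    """Approximate total DoF (mm) for moderate magnifications:

        DoF ~ 2 * N * c * (1 + m) / m^2

    N = f-number, c = circle of confusion, m = subject magnification.
    Note that focal length and distance only enter through m.
    """
    return 2 * f_number * coc_mm * (1 + magnification) / magnification**2

# f/2.8, full-frame CoC of 0.029mm, subject rendered at 1/20 life size
print(f"{depth_of_field(2.8, 0.029, 0.05):.0f} mm total DoF")
```

The formula makes the commenter's point explicit: any focal length / distance combination that produces the same magnification, at the same f-number and CoC, yields the same depth of field.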
LOL...next week on FStoppers: "Sales of APS-C cameras skyrocket!" ;)
How does fstoppers work by the way, can the authors just upload an article or is it checked by other authors or a senior or something?
Like someone says, this is wrong info and may put newcomers to make wrong decisions. This article should be removed.
When I read the title of the article I thought, "That's backwards!" Then I read the article and I realized the author is confused and writing about things he doesn't understand. After reading the comments and his replies I am even more convinced of that.
Who edits these articles for fstoppers? This is a great example of misinformation on the web.
I think it would have helped if the author had made more of a point that he is departing from the classical, textbook definition of depth of field which doesn't consider sensor resolution. It is true that the perceived sharpness of the final image is affected by both the resolution of the sensor and the resolving power of the lens which he did not mention. These factors have an overall effect on the sharpness of the image which combined with the DOF dictate the range of distance where the image is acceptably sharp. This final result could be called "effective" or "practical" depth of field as not to confuse the reader with the classical definition of DOF. There is already more than enough confusion when it comes to DOF.
This IS the TEXTBOOK definition of depth of field. Circle of confusion is not tied to pixel size, but it works as an analogy. Given an infinitely sharp lens and an infinitely sharp sensor, the smaller sensor will have a shallower depth of field with the same lens.
My original post was trying to help you, but you apparently didn't see it that way. Making a statement like the one in your response just confuses people. Remember that the textbook definition of DOF as it applies to photography refers to what a person with normal eyesight will deem to be in acceptable focus when viewing an 8x10 print from about one foot away.
For example, one takes pictures of the same subject at the same distance and f-stop with full frame and crop sensor cameras. If the resulting files are used to create identical pictures (meaning same framing of the subject) then there will be no difference in the DOF between the two prints.
If, on the other hand, one wants to use the entire sensor image and have the same framing in both pictures then they must increase the distance to the subject when taking the picture with the crop sensor camera. In this case when prints are made from full sensor images, the one shot with the crop sensor camera will have a GREATER DOF. This is due to the fact that the DOF increases by the square of the distance to the subject while the loss in DOF from the increased enlargement of the crop sensor image is linear with subject distance creating an overall increase in DOF.
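The square-versus-linear argument in this comment can be checked numerically with the far-field DoF approximation (a sketch of my own, with example numbers; valid while the subject is well inside the hyperfocal distance):

```python
def dof_far_field(f_number, coc_mm, focal_mm, distance_mm):
    """Approximate total DoF when distance << hyperfocal:

        DoF ~ 2 * N * c * s^2 / f^2
    """
    return 2 * f_number * coc_mm * distance_mm**2 / focal_mm**2

crop = 1.5
# Full frame: 50mm at f/2.8, CoC 0.029mm, subject at 3 m
ff = dof_far_field(2.8, 0.029, 50, 3000)
# Crop body, same lens: back up by the crop factor to match framing,
# and shrink the acceptable CoC by the crop factor (more enlargement).
apsc = dof_far_field(2.8, 0.029 / crop, 50, 3000 * crop)
print(apsc / ff)  # 1.5: net DoF grows by the crop factor
```

The distance term grows as the square of the crop factor while the tighter CoC only shrinks it linearly, so the crop-sensor shot ends up with DoF deeper by the crop factor, exactly as argued above.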
This is the reason why folks say the crop sensors provide greater DOF because they are referring to the final result using the full sensor image. This is the desired situation for optimum image quality and part of "getting it right in the camera".
Thanks for clarification of your intent ;) I don't think there's factually anything we disagree on. If you watch the rest of the video, all of it was addressed.
But the confusion, as I see it, isn't in my statement of the fact but in the improper application of a rule of thumb. Thinking the sensor causes shallower depth of field leads to long comment discussions like this one, where folks argue over things they don't understand. If you're going to try to understand lens equivalents, why half-ass the explanation?
Also, to nitpick: changing distance will make the subject size identical, but it will change the perspective, so it's not really a lens equivalent. I was taken to task for not mentioning that on YouTube.
I think this video explains it https://www.youtube.com/watch?v=f5zN6NVx-hY
edit: this is the video mentioned by the author in the comments
"Equivalent focal length" means nothing. Crop factor is a concept invented to help photographers transition from the 135 frame to the newly developed, smaller digital sensors that were available when DSLRs first hit the market. I'm not sure what the point of this article is.
Please at least adjust the title to mention focal length equivalents. Ex. https://www.slrlounge.com/depth-of-field-and-lens-equivalents/
Oops, the video in the article got it wrong.
The instructor has, as many before him, fallen into the trap of changing two or more variables and attributing the result to changing one variable. He changed sensor size PLUS pixel density PLUS print magnification and attributed the result to sensor size alone.
So let's try a really simple experiment. Make a 10×8 print from a FF sensor, then cut the print down to 8×6. This is about the same as going from a FF sensor to a crop sensor. So all we have changed, in effect, is the crop factor of the sensor. Pixel density remains the same, AND so does the print magnification. Has the DOF changed? No, it remains exactly the same!
So if we now blow up our cropped 8×6 print to 10×8 and compare it to the original uncropped 10×8, has the DOF changed? Yes, as we have now increased our magnification, and with it the size of every blur circle in the print.
Now to compare pixel density, we can say thank you to Sony for the wonderful A7 range. They have three FF cameras with different pixel densities: 12MP, 24MP, and 42MP. Will there be a difference in apparent DOF between them? Yes, there is.
The video did say it was comparing the same size prints (10×8) for each system, and that most crop-sensor cameras have higher pixel density. So why not also include the last variable and change the focal length of the lens to compensate for the change in angle of view between the sensor sizes? A 50mm lens on a FF camera covers about 47°; on a crop-sensor camera we need about a 35mm lens to give us the same 47°. If we shoot at f/8, the physical aperture is 1/8 of the focal length, so our FF camera has a taking aperture of 6.25mm (for the 50mm), while the crop camera's is about 4.4mm (for the 35mm). And as we know, a smaller aperture gives us a larger DOF.
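Thomas's entrance-pupil arithmetic can be double-checked in a couple of lines (a trivial sketch; the f/8 figures are his):

```python
def entrance_pupil_mm(focal_mm, f_number):
    """Physical aperture diameter: focal length divided by f-number."""
    return focal_mm / f_number

print(entrance_pupil_mm(50, 8))  # 6.25 mm on the FF body
print(entrance_pupil_mm(35, 8))  # 4.375 mm on the crop body
```

Same f-number, but the shorter lens has a physically smaller aperture, which is one way to see where the extra depth of field on the crop body comes from.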
So the result of all of this is that changing the sensor size on it own does NOT change the apparent DOF, it is only when we change another variable to compensate for the smaller sensor that we get a change in apparent DOF.
Thomas
Pixel density has only a tenuous relationship to this. Don't make the mistake of tying CoC to pixel size. A 12MP FF camera has the same DoF as a 50MP FF camera. The difference is only in the enlargement.
And since when would it make sense not to compare the same size print? Do you print MFT pictures at half the size of those from a FF camera? Does a MFT camera shoot half the HD resolution of its FF brother? Of course not. The only reason is to support your argument that they shoot the same DoF, but you're introducing an unrealistic variable.
Hi John, thanks for getting back to me. As for the same size print, I think you should only change one variable at a time and then look at the result. Otherwise, the correct conclusion would be "The Smaller the Sensor Size PLUS a greater image enlargement to give you the same size print, the Shallower Your Depth of Field".
As for introducing an unrealistic variable. No, I am reducing the variables to just one variable at a time. Your approach is:
Change A plus change B gives you C. Therefore, change A give you C.
My approach is:
Change A. Does that give you C? Answer: No
Change B. Does that give you C? Answer: Yes
Therefore, change B gives you C.
Is my approach correct?
No your approach is not correct because you're assuming that final print size isn't a variable.
If you take a print and cut out a smaller portion of it, you haven't changed the enlargement, but you have changed the proportions. And circle of confusion (and DoF) is defined for a certain print size, viewed from a certain distance, with a certain enlargement from the original (sensor size).
So by definition when you cut the image - you have changed a variable.
I think we both agree on the fundamental details of this subject. The fact is, image size and enlargement are tied together - make a change in one and the other must change as well - you can't just isolate one variable. Starting with equal enlargement holds no more mathematical purity than starting with equal print sizes. But starting with equal print sizes is much more commonplace. In video, large sensors shoot the same size image as small-sensor cameras. When people walk into Costco to print their photos, they select the print size, not the magnification from their sensor. Lastly, holding print size equal lets us discuss lens equivalents - which, if we just held magnification constant, would be pointless, because all lenses would be equivalent on all sensors.
If I am understanding this correctly, I think I knew it, because I always think about the distance from camera to subject and from subject to the bokeh-able background. So while I hadn't really considered it as a factor when choosing full frame vs. crop, it makes sense.
Great video. It's always nice to see the science behind the art. I have a question though: is image compression a product of the lens, the sensor size, or a combo of both? I have a Sony a7 II and the 50mm Zeiss Loxia, and realized I have an APS-C option that could effectively make the lens an 80mm equivalent. I would love to experiment with this in portraiture because of the compression implications. I just want to know if I can expect it to act like an 80mm lens, or if it will only show an 80mm field of view.
Thanks to anyone that can answer this for me.
Yes, it will behave for all intents and purposes like an 80mm. But remember, it's not the lens that compresses the image, it's the distance. An 80mm will force you to move back, putting more space between you and the subject, and therefore more perspective compression.
Almost always, when someone presents an article or video it creates an enormous diversity of comments and opinions, which just goes to show what a difficult issue it really is. In the end, what matters is results.
For various trend, technical, and stylistic reasons, photographers have become a little preoccupied with shallow DOF over the past few years; perhaps we need to re-examine our motivations and the reality.
Shallow dof is largely about creating separation between subject and background, but there are many factors that play into that separation, as the article, video and comments have explored.
But separation also has a lot to do with presentation size and the type of display (print vrs screen).
The thing is, if you are only looking at small web images, you need shallow DOF to get a good degree of separation, hence FF DSLRs with wide apertures might be optimal. On the other hand, a look that succeeds in small web format often proves hopelessly soft for a medium- to large-scale print, where the viewer probably expects a more immersive experience.
Oddly, perhaps, it may be easier to get a look appropriate to your "artistic intention" with a smaller sensor for larger images (disregarding noise and IQ issues).
Your display can have an effect: anyone who owns an iMac with a 5K display will no doubt have noticed that images they once considered truly sharp can often look comparatively poorly resolved on the new display.
Print resolution can have an influence, as can sensor and lens resolution, and even post-capture sharpening methods.
Basically, I see it this way: DOF effects are to a great degree about the difference between resolved and less resolved; the greater the peak resolution of the whole system, the greater the potential for a visual difference and separation.
Concentrating purely on format or lens aperture addresses just part of the system, and can, and often does, lead us down an expensive rabbit hole.
I love shallow depth of field; that's the reason I went from APS-C to full frame, from full frame to medium format (film), from that to 4x5 large format (shooting wide open at f/4.5), from that to 18x23cm large format, and now I've ended up with 30x40cm large format.
For example, 150mm on 4x5 is like a 50mm on full frame, and my 380mm on 30x40cm is quite a wide angle.
I imagine depth of field on different sensor/film/plate size like that:
The bigger your sensor/film/plate is, the more information it captures around your subject (if you don't move).
For example, if I shoot a headshot at f/4.5 on a 30x40cm plate, you would see the whole face, but the nose and forehead would already be out of focus.
If I shoot the same head at the same distance on 4x5 plate or film, I would just see the lips, for example, and they would look quite focused; on an APS-C sensor you would see just a part of the lips, and it would be tack sharp.
I attached a picture for better understanding
Whoever wrote that title does not understand how this works. One lens cannot and never will have a different depth of field on different sensors. The only thing that differs is the crop. If I take a Nikon D7000 and a D800 and shoot with a Nikkor 50mm f/1.8, the DoF will be exactly the same at the same distance from the subject and the same focus. The difference is that on the full-frame sensor I will "catch" more of the environment, since parts of the image projected onto the sensor don't hit the sensor on the D7000 (they are cropped away).
I wish this notion that 50mm on a full frame = 85mm on an APS-C would go away. If I shot a photo with a 50mm on a full frame, cropped away a portion of it, and then claimed it was shot at 85mm, people who know their stuff would think me insane.
I want my time back. What a stupid article.
Who the hell would ever shoot with a 36-50mp full frame camera, and in the middle of composing the photo, this thought crosses their mind:
"HMMM. IF I CROP THIS PICTURE TO APSC (1.5X ZOOM) IN POST, I CAN GET SLIMMER DOF!!!!"
No! No fruitcake is ever going to do this! They're going to step in closer to the subject and get the right composition, duuuh. This article is all theory and calculations and ZERO practicality.
Then again, I don't get why my iPhone 6 cannot have the same DoF or look as my Cinelux Ultra 110/2 on a Mamiya 645AFD/ZD, when both apertures are around f/2.0.
Sounds interesting...but in this day and age, almost 20 minutes is too long to say anything.
Can't wait for next week's article: "Shorter lenses create more DoF."
See the math here: http://www.bhphotovideo.com/explora/photography/tips-and-solutions/depth...
I know I am super late to the party, but the truth is: The larger the sensor, the shallower the depth of field.
For example: if you look at the footage from a GoPro, you'll notice there is basically no bokeh (unless you get the lens literally right up next to a subject). Same goes for an iPhone 6 (except the iPhone 6 has a slightly larger sensor, so you can get a slightly shallower depth).
Multiplying by the crop factor does not affect the depth of field the way an optical zoom does (that would be like claiming that cropping a photo in Photoshop makes the depth of field shallower).
"The Smaller the Sensor Size, the Shallower Your Depth of Field"
That statement is not true; it is comparing apples with oranges!
In fact, with the same field of view and aperture, the larger of two recording planes will have the shallower depth of field.
Assuming two sensors with an equivalent number of pixels, you appear to be saying that the depth of field on a smaller sensor will be shallower. This has to do with pixel density and resolution; nor have you taken into account that tightly packed pixels bleed into one another and increase noise!
When all things are equal - the field of view, the aperture, the quality of a camera's recording plane - the larger the plane, the shallower the depth of field. So it's not true to say "The Smaller the Sensor Size, the Shallower Your Depth of Field".
Mobile phones with tiny sensors don't produce a shallow depth of field...
Hopefully someone can explain to me why I had a shallower depth of field :
My camera = Canon 70D
Settings = 18mm (18-55 kit lens); F 4.0, 1/30, ISO 400, Speedlite 600EX II-RT fired; shooting in Auto setting
Shooting next to me....
Friend's camera : Canon G16
Settings = 6.1mm; F 1.8, 1/80, ISO 400, flash not fired
It was a group photo almost edge to edge.
My friend's shot ended up sharper. The people at the edges were still a bit out of focus... but the edge people in my shot were much more so.
The first pic is the G16, the second is my 70D.
At a higher f-number, with a bigger sensor, at the same distance, shouldn't my pic have a greater depth of field?
I don't think this is a DOF issue. I can't tell if it's out of focus, or if it's just motion blur from shooting at a slow shutter speed. The G16 was shooting a faster shutter speed, and also includes built-in image stabilization to reduce the effect of camera shake.
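For what it's worth, plugging both setups into the standard hyperfocal/DoF formulas suggests the G16 should also have the deeper depth of field here, despite its wider f-number. A rough sketch (the CoC values and the 3m subject distance are my assumptions: ~0.019mm for the 70D's APS-C sensor, ~0.006mm for the G16's 1/1.7" sensor):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance: H = f^2 / (N * c) + f."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

def total_dof_mm(focal_mm, f_number, coc_mm, distance_mm):
    """Total DoF from the standard near/far focus-limit formulas."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = h * distance_mm / (h + (distance_mm - focal_mm))
    far = h * distance_mm / (h - (distance_mm - focal_mm))
    return far - near

# 70D: 18mm at f/4 (APS-C, CoC ~0.019mm), subject at 3 m
dof_70d = total_dof_mm(18, 4.0, 0.019, 3000)
# G16: 6.1mm at f/1.8 (1/1.7" sensor, CoC ~0.006mm), subject at 3 m
dof_g16 = total_dof_mm(6.1, 1.8, 0.006, 3000)
print(dof_70d / 1000, dof_g16 / 1000)  # DoF in metres; the G16 is deeper
```

Under these assumptions the G16's much shorter focal length more than makes up for its wider f-number, so on pure DoF grounds the G16 shot would be expected to hold more of the group in focus, quite apart from any motion blur.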