Why Are Wide Angle Lenses Misunderstood and Avoided for Portraits?

You've heard that a portrait lens is one with a focal length of 50mm or above and that wide angle lenses create a distorted image when used for portraits. This article will try to help you understand and overcome that prejudice.

What Is a Portrait?

Portraits of people can be all kinds of images, from the paintings of the masters of old to the work of the masters of photography today. It's an image of a person fitted into a frame of a certain geometry. A portrait is not just the head of the person; there's a special term in photography for that: a headshot. A portrait can be a full-body image too.

A senior portrait taken with a wide-angle lens

'Wide Angle Lenses Distort the Face,' They Say

This is where the misconception comes from. In theory, a wide angle lens should capture more of the view in front of the camera than a longer lens, which means it will change to a certain extent the viewing perspective from what we see with our own eyes. Objects that are close to each other will look farther apart with a wide angle lens. It will also have different depth-of-field properties per aperture: wide angle lenses have a deeper depth of field than longer focal lengths at the same aperture. For this reason, the blur of the background and the foreground (assuming the subject is in the mid-ground) is more prominent with longer lenses.

"OK, but wide lenses distort the image. This is why they are not liked for portraits; it's not the depth of field or what they see." This is the common complaint.

"Alice in Wonderland" series, "The Rabbit Hole." The image is taken with a wide-angle lens

Why Is There Distortion With Wide Angle Lenses?

If you have shot with a wide angle cinema lens, it will change your mind about what you call "distortion." The optical defect that makes most photographers avoid using wide angle lenses is a radial-looking distortion that is different from the perspective distortion of the lens. The radial distortion is making the image look as if areas closer to the periphery of the lens are bent inwards or outwards.

To illustrate that in a different way, let's suppose we have a wall, which we look at from the side with a 50mm lens on a full-frame sensor (a) and with a wide angle lens on the same sensor. A wide angle lens without radial distortion (b) will change the perspective, as you see in the middle, while a radial distortion defect (c) will bend the lines that are closer to the periphery of the frame, and thus, you will no longer have straight lines in these areas.

Normal view vs. perspective vs. radial distortion

There's radial distortion with longer lenses too, but few people complain about it. That defect is not supposed to be there. As making a lens with no radial distortion, or only minimal radial distortion, is an expensive process, most manufacturers decide to leave it, and today it's often used for artistic purposes. If you purchase a quality wide angle cinema lens, the radial distortion there is very small, if not almost eliminated. If you take the radial distortion seriously and correct it with any lens you use, you will start loving the different perspective of the wide angle view without the lines bending and will start to incorporate that look more and more in your images.

In the following video, you will find tests with different 14mm cinema lenses, and the radial distortion (called simply "distortion" in the video) will be close to none. You may be surprised how a wide angle view should normally look compared to what most still lenses do today.

How to Deal With Wide Angle Lenses' Radial Distortion?

The obvious remedy is to buy lenses with almost-eliminated radial distortion, but if you can't afford them, learn how much your lens is distorting the image, as it's different from lens to lens. There are two methods, which work best if they are both applied: compose accordingly and fix the rest in post.

Important Things Should Be Away From the Edges

In most cases, wide angle lenses tend to deform the areas close to the borders of the frame. If you have objects there, whether these are human faces, limbs, or rear parts of the body, they will look bigger than normal. To avoid that, keep important objects away from the edges of the frame, also because you may need to crop these areas out in post.

Fix It in Post

You can either crop out the border regions or try to correct some of the distortion to an extent that it's bearable, which will inevitably make you crop some of the pixels out.
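To make the "fix it in post" step concrete, here is a minimal sketch (not from the article; the function name and coefficient values are illustrative assumptions) of the Brown-Conrady radial distortion model that correction tools typically estimate and invert:

```python
# Sketch of the Brown-Conrady radial distortion model (illustrative only).
# k1 < 0 gives barrel distortion, k1 > 0 gives pincushion.

def distort_radius(r: float, k1: float, k2: float = 0.0) -> float:
    """Distorted radial position of a point at normalized radius r
    (r = 0 at the image center, r = 1 near the frame edge)."""
    return r * (1.0 + k1 * r**2 + k2 * r**4)

# Points near the frame edge (large r) move the most; the center barely changes.
for r in (0.1, 0.5, 1.0):
    print(r, distort_radius(r, k1=-0.2))
```

With a negative k1, points near the edge are pulled toward the center, which is why undoing the distortion pushes them back out and leaves empty corners that then have to be cropped, exactly as the article describes.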

Use a Camera With More Resolution

If the image is heavily distorted, you can crop out the distorted areas and keep the normal-looking part while still retaining a resolution that is usable by today's standards.


Avoiding wide angle lenses is avoiding the defects they are built with. Knowing how to use them will help you take advantage of the environment you see with your own eyes. Including more of the environment tells a different story and lets the viewer's eyes wander around, enjoying the details, especially if they are masterfully presented.


Dave McDermott's picture

Up until recently I never would have considered using a wide angle lens for portraits, but I really like the results I have gotten with mine. In certain situations it can work wonders.

Tihomir Lazarov's picture

Yes, indeed. The perspective is unique especially if the image doesn't look crooked, but simply "different."

Eric Salas's picture

Because people read and stick to rules too much instead of shooting.

Tihomir Lazarov's picture

The same with shooting landscapes with long lenses.

Why are [INSERT THING HERE] misunderstood and avoided for [INSERT PURPOSE HERE]?

Because people whom beginners look up to misuse terms, say things which are incorrect, and oversimplify, which leads them to misunderstandings and fear.

Here are a few of these things, from this article alone.


«…a wide angle lens… than a longer lens,… will change to a certain extent the viewing perspective….»
Perspective is simply a matter of where we stand relative to the subject. It is not in any way* dependent on the lens, nor the sensor size. *Unless one insists that a certain lens or sensor size “makes” them come in closer or stand further back, thus a correlation. But the effective change in perspective is achieved by the position in space, not the lens on the camera. If three photographers stood at the same spot with three different lenses and cameras, they would all get the same perspective.

«Objects that are close to each other will look farther apart with a wide angle lens.»
Again, the perspective is not from the lens, but from the relative distances between the objects and the camera. It does not matter what lens is used, if the photographer is in the same spot, and/or the relative object-camera distances are the same, the objects will NOT look any closer, nor further apart.


«…wide angle lenses have a deeper depth of field than longer focal lengths at the same aperture.»
Many photographic teachers keep using the terms Aperture and f-Number interchangeably. They are not the same. Aperture is the diameter of the iris, measured in millimetres, and f-Number, ‘N’, is the focal ratio, which is the ratio of the focal length, ‘f’, to the aperture diameter, ‘D’, and has no dimensions. It is given by

N = f / D

which is why aperture is stated as f/2.8, f/3.5, f/4, f/5.6, etc. The aperture is the diameter, not the f-number, N, stated as 2.8, 3.5, 4.0, 5.6, etc.

That being said, the DoF decreases with aperture, (and is the same for any given aperture on any lens), and the amount of light entering the box decreases with f-Number. Any lens with f-number 4.0 will let in the same amount of light, but any lens with an aperture of 25mm will have the same DoF. To wit, a 50mm @ f/2.0, a 100mm @ f/4.0, and a 200mm @ f/8.0 will all have a 25mm aperture, and give the same DoF, but will require adjustments in either Lv, Tv, Sv, or EI to compensate for exposure.
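The diameter arithmetic in the comment above is easy to verify with a short sketch (Python here purely as an illustration; the helper name is an assumption), using N = f/D rearranged to D = f/N:

```python
# Check that these focal length / f-number pairs all share a 25 mm
# physical aperture diameter, per D = f / N. Illustrative sketch only.

def aperture_diameter(focal_length_mm: float, f_number: float) -> float:
    """Physical aperture (entrance pupil) diameter in millimetres."""
    return focal_length_mm / f_number

pairs = [(50, 2.0), (100, 4.0), (200, 8.0)]
diameters = [aperture_diameter(f, n) for f, n in pairs]
print(diameters)  # each pair works out to a 25 mm diameter
```

Note this only confirms the diameters match; whether equal diameters give equal depth of field is disputed further down the thread.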


«The radial distortion is making the image look as if areas closer to the periphery of the lens are bent inwards [pin-cushion] or outwards [barrel].»
You have actually got this correct. Radial distortion can and does happen on less expensive lenses, regardless of the focal length, and whereas many people think that wide-angle lenses tend to barrel [your (c) illustration] and telephoto lenses tend to pin-cushion, this is not always the case: one can see pin-cushion distortion on a wide lens just as easily as barrel distortion on a long lens.

What you got wrong was the illustrations (a) and (b). The difference seen here will only happen if the photographer got closer to the wall with the wide-angle lens. If he stayed at the same distance, he would have seen the same wall, just smaller.

«…wide angle lenses tend to deform the areas close to the borders,… they will look bigger than normal.»
In your defence, you did say, “In most cases.” I have not done any sampling to see if that is true, but I can say that, in some cases, they will look smaller than normal. To be more precise, when there is barrel distortion, the edges will look bigger, and the corners will look smaller. When there is pin-cushion distortion, the corners will look bigger and the edges will look smaller.


When one does radial distortion correction in processing, —or any kind of geometric distortion corrections— it will result in the need to crop out the dark areas which are left behind, (or fill them somehow). What one needs to also add to this is that cropping will always give the effect of using a lens of a more narrow FoV, as cropping narrows the FoV. That may seem kind of obvious, but when it comes to teaching beginners, we need to not assume that things are obvious.


«Avoiding [INSERT THING HERE] is avoiding the defects they are built with.»
That only applies to those things which have defects, —which, arguably, is everything— as opposed to looking for things without obvious defects, like better lenses. Most lens reviews discuss how much radial distortion is noticeable in the image, and, just like lenses with heavy diffraction, bright internal reflections, copious chromatic aberrations, etc., are to be avoided, so are [any] lenses with plenty of radial distortion.

However, the best lens is the one you have, so one must learn how to cope with the issues at hand, whatever they may be.

Eric Salas's picture

Can you put this in Cliff notes?

Tihomir Lazarov's picture

When photographers shoot from the same place, they all will see the same objects regardless of the lens, but the wider lenses (without the radial distortion) will change the perception of distances between objects. It will not see what's behind that corner or anything else, but the way the objects are projected onto the sensor is different than with other focal lengths. There are two distortions: the normal change in the perception of object relations, and the defects that are introduced, such as radial distortion (like barrel distortion). When we eliminate the radial one, we are left with the other "distortion," which changes the perception of distances between objects, so that further-away objects seemingly look further away than they are in relation to closer objects.

One can't avoid the perspective perception change with a wider lens even without the radial distortion, because it's simply physics. The standard thin-lens formula is 1/image + 1/object = 1/focal_length, where "object" is the distance from the object to the lens and "image" is the distance from the lens to the projected image. Depending on where the sensor is, the image will be projected either on the sensor plane, in front of it, or behind it. If it is projected in front of or behind the sensor, we have a blurred object.

If we know the distances to the objects, say 100cm, 1,000cm, and 10,000cm (each 10 times further than the previous), on the image they look quite different, because we don't have a linear conversion from the real world to the image plane (sensor, film, etc.); it's more like a tangent function. In this case, if we set the focal length to 10cm (keeping all distances in centimetres), we will have:
1/image = 1/10 - 1/object

Now for the 3 different object distances we have:
1/image = 1/10 - 1/100 = 9/100, or image ≈ 11.11cm
1/image = 1/10 - 1/1000 = 99/1000, or image ≈ 10.10cm
1/image = 1/10 - 1/10000 = 999/10000, or image ≈ 10.01cm

If you take an object that is 100,000cm away, on the image plane it will be at about 10.001cm.
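These numbers can be reproduced with a few lines of Python (a sketch, assuming the same thin-lens formula and centimetre units as the calculation above; the function name is an illustration, not from the comment):

```python
# Thin-lens image distances: solve 1/image + 1/object = 1/f for image.
# Units are centimetres throughout, with focal length f = 10 cm.

def image_distance(f: float, obj: float) -> float:
    """Image-side distance for an object at distance obj from a thin lens."""
    return 1.0 / (1.0 / f - 1.0 / obj)

for obj in (100, 1_000, 10_000, 100_000):
    print(f"object at {obj} cm -> image at {image_distance(10, obj):.3f} cm")
```

The output shows the same pattern as the worked figures: each tenfold increase in object distance shifts the image plane by less and less.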

From this you can see that the closer the object, the more the projection position shifts, while for objects further away, the projections don't move that much. For that reason, closer objects (on a wall you look at from the side) will look further apart, while distant objects (at the end of a hallway) will look very distant at small focal lengths in comparison with the closer ones. At the same time, distant objects will look very close to each other. This is why distant objects in the middle of the frame look like they are vanishing toward the vanishing point more quickly than objects that are closer.

And let me clarify once again: this perspective calculation doesn't have anything to do with barrel or other radial distortion. Radial distortion changes the image on top of the perspective perception change.

This is why the perception of perspective is changed by the lens (contrary to your statement), but is not changed by the sensor.

Kirk Darling's picture

Tihomir Lazarov , nope. Your basic statement " the wider lenses (without the radial distortion) will change the perception of distances between objects" is incorrect, and all following from that is incorrect.

The lens doesn't change perspective; the distance from the subject and other objects changes the perspective. If you do not change any of the distance relationships, but change only the lens, the perspective remains the same.

Put the widest lens available on your camera and take a shot without changing any distance relationships. Then mount successively longer lenses, taking more shots with each lens. You will find that angle of view is decreased, but perspective remains exactly the same.

In fact, if you crop the wide-angle lens to match the cropping of the longer lenses, you will have the same perspective matching each crop with each longer lens.

And you can go the other way by using a "long" lens on a larger format. Put a 100mm lens on an 8x10-inch camera, and the image will look "wide angle," but crop a 24x36mm rectangle out of that image, and the crop will look "long" again.

Changing the lens changes the field of view, not the perspective. Changing distance along with changing the lens changes the perspective.

Tihomir Lazarov's picture

The more correct term (that I used in the comment above) is "perception," not "perspective." It's the squeezing of pixels in the middle which changes the perception, not looking around the corner, which would be a change of perspective. Objects will again be one behind the other, but pixels will be squeezed differently throughout the frame, and thus the perception of distances is changed. In the article I used the term "perspective" in combination with "to a certain extent," which wasn't technically correct, although I clarified that I'm talking about object distances in relation to each other; here in the comment I used the right term: perception. I've never stated that you can see more of an object with one lens than with another (when both objects are in the frame).

This is the reason why it's called "perspective distortion," which takes a perspective and distorts it working only with what is coming through the lens. The latter is constant, while the lenses are variable and thus the perspective distortion is different, but based on the same source objects (or source pixels if you will).

The perspective distortion will distort a flat image just like a filter in Photoshop. It won't change the objects in a 3D space but will squeeze or stretch "pixels."

Kirk Darling's picture

You missed the point. There is no distortion, perceived or otherwise, changed by lens focal length.

The change is made by the distance relationship of the camera to subject.

Tihomir Lazarov's picture

What you don't take into account is that this is not camera obscura physics: you have a series of convex and concave glass elements that help you see more than you could with a camera obscura on the same sensor size.

In a camera obscura scenario (or a pinhole camera) you see more if you make your sensor or your opening bigger. When your sensor area is small, you can't project an object that is outside the field of view. With a lens that has a short focal length (which, relative to the sensor size, we can call a wide angle lens), you project objects that are outside the field of view, and thus it compresses the information. This stretches the image in a certain way that is expressed by the mathematical equation I showed above. You can't escape that distortion (not talking about radial distortion here) because you have non-flat lenses working in an ensemble to project the objects onto the sensor.

This is the reason interiors are not to be photographed with wide angle lenses, regardless of the presence or absence of radial distortion. Interiors should be shot with bigger sensors and normal lenses that don't distort the objects in order to fit them onto the same sensor.

I'm attaching an image to show what a camera obscura would see (the top drawing) and why the sensor size is directly related to the field of view that will be projected onto the sensor. If you want to see more, you create a camera obscura with a bigger sensor because your "lens" is the same. But if you put a series of concave and convex lenses in front of the camera obscura opening in order to create a wide angle lens, you will get objects distorted (assuming no radial distortion is present).

You fail to see the flaw in that very wide angle view (yes, clearly exaggerated to demonstrate the effect): you have an impossible scenario of a concave lens forming an image. That diagram does nothing much to illustrate the real-world scenario.

Using a pin-hole camera, I basically reduced the lens to a “perfect, simple” lens. It gets rid of the formulas for CA, ED, etc., and makes things simple for our calculation. For a pin-hole camera to see the tree displayed, it merely needs the screen much closer, and much larger. That would be the equivalence of a wide-angled, simple, perfect lens.

In the real world, such a lens does not exist, which is why we get bokeh, CA, ED, etc., and have to consider rectilinear, stereographic, and curvilinear projections.

Still, you are thinking in the box, and if you took these same diagrams and looked at them from out of the box, you would see the simplification of the entire thing.

Perception of Distance

The perception of distance between objects in a 2-dimensional photo is caused by the relative size of the objects. That is directly related to the ratio of the distances from the lens to the objects.

Your calculations are in the box for a 10mm lens, and are mostly irrelevant, because the position of the image relative to the film plane is not the deciding factor for the object size on the film. Think outside the box for a moment.

Using simple geometry (no trigonometry required at all), taking similar objects to the ones you just did, one 1m away, one 10m away, one 100m away, and making them each 1m tall, then projecting them onto a flat screen through a pinhole, we see that the size of each object relative to the others is inversely proportional to the distance of each object from the pinhole.

With the film plane 10mm from the pinhole, (10mm lens equivalence), or 100mm or 1,000mm from the pinhole, (100mm and 1,000 mm focal lengths equivalence), the relative sizes, ergo, the relative difference in apparent (perceived) closeness to each other, remains unchanged. This is true no matter the size of the screen the pinhole image is projected on. Replacing the pinhole with lenses, and the screen with film, does not alter the mathematics.

The triangles do not change, so the apparent sizes, hence the perceived distances, do not change. The only thing which changes the triangles, is moving the camera position, or moving the real distances between the objects.
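The similar-triangles argument above is easy to check numerically. The sketch below (illustrative function name, not from the comment) shows that the ratios of projected sizes are identical for any pinhole-to-screen distance:

```python
# Pinhole projection: apparent height h' = h * f / d (similar triangles).
# The *ratios* of projected sizes do not depend on f, only on the distances d.

def projected_height(h: float, d: float, f: float) -> float:
    """Image height of an object of height h at distance d, projected onto
    a screen a distance f behind the pinhole (same units throughout)."""
    return h * f / d

distances = [1.0, 10.0, 100.0]        # objects 1 m, 10 m, 100 m away
for f in (0.010, 0.100, 1.000):       # 10 mm, 100 mm, 1000 mm "focal lengths"
    sizes = [projected_height(1.0, d, f) for d in distances]
    ratios = [s / sizes[0] for s in sizes]
    print(f, ratios)  # the ratios come out the same for every f
```

Changing f scales every projected size by the same factor, so relative sizes, and hence the perceived distances, are unchanged, which is the point being made.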

user-206807's picture

Karim Hosein, I see that you voted me down… you're free to do it, of course, but in doing so you just show how arrogant you are, not accepting to be corrected when you say (write) something wrong. Any lens changes the perception of distances between objects; it is just a fact of physics, and you can write all you want, I am sure that you will not change the laws of physics…

user-206807's picture

Any lens changes the perception of distances between objects…

user-156929's picture

Of course you're right but some of us just shoot so practical inaccuracies are more beneficial than pedantic truths.

If all one does is just shoot, then the practical inaccuracies are neither harmful nor beneficial. That aside, I was referring to those trying to learn something to help them, not those who just shoot.

user-156929's picture

Help them in what way? Knowing certain lenses have certain effects should be enough. The whys and wherefores may be interesting but to what benefit?

«…certain lenses have certain effects….»
And there we go again. That sort of false information is not useful. Useful information is that perspective is a property of where one stands relative to the subject, and focal length sets FoV around that perspective.

There are examples of this misconception with long and short lenses. First, two photographers standing in precisely the same spot, shooting planes in formation, one with a 200mm, the other with a 300mm, and the one with the longer lens, insisting that he will get better compression due to his longer lens. Second, two photographers shooting a house interior, one with a rectilinear fish-eye lens, the other with a “normal” lens, (focal length, 35mm, approximately equal to the image width, 36mm). The one with the normal lens says that he takes several images and stitches them together, and because he is not using a fish-eye lens, he gets no “perspective distortion.”

Both of them are wrong. The photographer with the 200mm lens gets the same “distance compression” as the one with the 300mm lens, because they are shooting the same subject from the same place. The photographer with the fish-eye lens and the one who is wasting time stitching together images from the 35mm lens will get the same “perspective distortion,” because they are shooting the same room from the same spot.

If they were taught that truth, that perspective is a matter of subject-to-camera position in space, then they would not be working so hard to get an image which could be made more easily.

If one wants to talk about certain lenses having certain effects, talk about rectilinear projection lenses versus stereographic projection lenses. Those are certain effects which can be useful to know when taking certain pictures.

user-156929's picture

Well, then, good luck with that.

Tihomir Lazarov's picture

We're not talking about perspective changes here. Let me repeat that: it's a distortion just like getting an image in photoshop and applying a perspective transformation. In order to fit all the pixels in a frame the wide lens has to alter the image in some way, and the simple calculations above show how each pixel finds its place. If you photograph something with two lenses (wide, say 35mm, and longer, say 70mm) with objects that are close to you and on both sides of the middle of the frame, you will see that the formula above is correct: the center parts of the image look quite similar with both lenses, because they are further away, but the objects closer to the edges of the frame, invisible to the "normal" lens (say a 50 on a 35mm sensor), look much closer than they do with the longer lens. If you crop out the sides of the frame, it is as if you were looking through a longer lens, because the part of the optics that makes the different look is cropped out. And yes, first remove any radial distortion, or shoot with a high quality wide angle lens that has minimized it.

Let me put it in a different way, if you want to see more of a scene. You watch something through a tube and you see a certain part of the environment. In order to see more of the scene you have to go back, but this will change the perspective entirely. For this purpose a number of lenses are made which are not a flat piece of glass, but with certain curvature that will allow you to project objects that are not normally visible through the tube into the same sensor size (the back opening of the tube). The objects that are straight ahead do not get much distortion, because they come straight through the glass, which in the center can almost be assumed to be "a flat piece of glass," so there's not much bending. This is why, when cropping the center of the image of a super wide angle lens, we don't see anything different from what we photograph with a super telephoto lens. The changes start to happen when we deviate from the center. Most high-end wide angle lenses try to keep straight lines straight, but as you have mentioned they distort other geometric shapes. This is what we compensate with for including more of the environment than we normally see through the tube without any lens.

«…just like getting an image in photoshop [sic] and applying a perspective transformation.»
«…to fit all the pixels in a frame the wide lens has to alter the image in some way….»
«…Most high-end wide angle lenses try to keep straight lines straight, but as you have mentioned they distort other geometric shapes. This is what we compensate with for including more of the environment….»

This is not a “Distortion” issue, but a “Projection” issue. It has little to do with the focal length, and more to do with the lens formula. This issue also occurs in tele lenses, but is only noticeable with a wide FoV.

Having said that, the issues of “edge distortions” which you bring up, although only noticeable with a wide FoV, —hence, by my definition of what constitutes a wide angle lens in another thread, only occurring in wide angled lenses— are not necessarily of the nature you described, as it depends on the projection of the lens one is using.

So going back to the statement, «One can't avoid the perspective perception change with a wider lens even without the radial distortion, because it's simply physics,» the answer is, “Yes one can, by simply choosing the correct projection, as the problem is not so much physics, as it is geometry.” The error with this is that, unless one has more than one wide lens, one has little choice but to fix in post, which ultimately means cropping the image.

«You watch something through a tube and you see a certain part of the environment. In order to see more of the scene you have to go back, but this will change the perspective entirely. For this purpose a number of lenses are made which are not a flat piece of glass, but with certain curvature that will allow you to project objects that are not normally visible through the tube into the same sensor size (the back opening of the tube).»

This analogy is that of the diagram you gave earlier, with the same fallacy, because other things change: the diameter of the tube, the length of the tube, and the distance of the tube from the sensor. This is one reason why Nikon et al are touting shorter flange distances: to make better wide angled lenses. On most SLRs, there is a limit to the minimum focal length lens available, and it is not a matter of projection, nor distortion, but a matter of the ‘length of the tube’. To accommodate this issue, lenses have been designed in the past which require the mirror to be locked up, so that their elements can be set further back in the camera.

Now when a camera has a flange distance of 45.46mm, —Pentax K-mount— the lens needs to do some pretty fancy stuff to allow a 10mm lens —Pentax smc P-DA Fish-Eye 10-17mm ED (IF)— to focus to infinity. This fancy footwork does not prevent certain AoV or certain projection lenses; it just makes them more difficult to manufacture. The focal length (or AoV) of the lens does not cause the “edge distortion”; the projection of the lens and the lens formula does.

«The changes start to happen when we deviate from the center.»
True with any projection. More so as the AoV becomes greater: the effect of the projection becomes more pronounced as we look towards the edges/corners. Cartographers have had this issue for years. When mapping a 1 square kilometre piece of property near the equator, it really does not matter much which projection they use. However, as they move towards the poles, or start to map larger land masses, the projection of choice makes a big difference.

Which projection is correct? It really depends on for what purpose the map is needed. Same is true with photography and lenses. To wit, teaching, “One will get such-and-such ‘distortion’ when one uses a wide angled lens,” or “A wide angle lens always causes this-and-that ‘distortion’ at the edges,” is a fallacy which needs to be corrected, so that students of photography can stop having misunderstandings of [insert thing here] and stop avoiding them for [insert purpose here].

In this image, I used my raw processor of choice to somewhat alter the projection, from the rectilinear projection of the 18-55mm lens (at 18mm) to a more stereographic projection. This naturally resulted in a crop of sorts. It also made the horse's legs appear a little “out of step,” so to speak, but the rider's head no longer looks as weirdly elongated as in the original. A different lens —such as the P-DA Fish-Eye 10-17mm ED (IF) at 17mm— may have given a better projection to start with, but I would not have been concerned with “edge distortion due to the wide angle lens.”

Tim Ericsson's picture

It’s more about what someone considers ‘useful’. You’re obviously very interested (and I assume knowledgeable) about the particularities, but others can have very successful and rewarding lives as photographers without the pedantry.

If anything, this level of specificity might muddle real world application of the concepts. Knowing the effect certain lenses weilds is enough for many to produce images of value.

But you do you.

«…others can have very successful and rewarding lives as photographers without the pedantry.»
I agree.

«…this level of specificity might muddle real world application of the concepts.»
I agree.

«Knowing the effect certain lenses weilds [sic] is enough for many to produce images of value.»
I agree.

My issue is when someone teaches them something which is incorrect, and the misunderstanding causes them to avoid [insert thing here] for [insert purpose here].

Certain lenses —such as rectilinear vs stereographic projection— do wield certain effects, and that knowledge is enough. “Wide angle lenses cause edge distortion which makes straight lines near the edges of the sensor appear curved,” is not one of those things.

Tim Ericsson's picture

Sure that makes sense. Thanks for the clarification.

Matthias Kirk's picture

"but any lens with an aperture of 25mm will have the same DoF"
That's just wrong.

It is precisely right. Doubt it? Try it in any of the good DoF calculator apps available on iPhone, Android, or the Internet.

Your saying it is wrong does not make it so, especially when physics and almost every DoF calculator app says that it is right.

Matthias Kirk's picture

Yeah, I doubt it because the actual optical formulas state that DOF is a function of CoC, focal length, focus distance, and aperture and that none of these factors affect each other linearly.

But maybe I interpret these wrong? Let's check with the help of the good old DOF calculator on www.dofsimulator.net:

CoC 0.03mm, Distance 500cm, constant diameter of the aperture 25mm.

50mm, f/2: DOF=1.21m
100mm, f/4: DOF=0.59m

Pretty dramatic difference, right?

Maybe if we keep the framing constant?
50mm, f/2, 2.5m: DOF=0.30m
100mm, f/4, 5m: DOF=0.59m
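If you want to check these numbers without the website, the textbook hyperfocal-distance formulas reproduce them. A quick Python sketch (all lengths in mm; the formulas are the standard ones, as published by, e.g., DOFMaster):

```python
# DoF from the standard hyperfocal-distance formulas (all lengths in mm).
def dof(f, N, s, c):
    """f: focal length, N: f-number, s: focus distance, c: circle of confusion."""
    H = f * f / (N * c) + f               # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)  # near limit of acceptable focus
    far = s * (H - f) / (H - s)           # far limit (valid while s < H)
    return far - near

# CoC 0.03mm, distance 500cm, constant 25mm iris diameter:
print(dof(50, 2, 5000, 0.03))   # ≈ 1205 mm, i.e. 1.21 m
print(dof(100, 4, 5000, 0.03))  # ≈ 590 mm, i.e. 0.59 m

# Constant framing instead (half the focal length, half the distance):
print(dof(50, 2, 2500, 0.03))   # ≈ 295 mm, i.e. about 0.30 m
```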


Saying "physics says that it is right" doesn't make physics say it's right.

Patience. I do not live on the Internet.

So, to clarify, I am speaking of a “same image” situation. I see that I did not make that clear in my last post, as I thought the context of the thread made it clear. On re-reading, maybe it was lost. Let me clarify what I was trying to get across.

When one considers out-of-the-box calculations, one is no longer considering “circles of confusion,” but “cones of confusion,” from which these circles are derived. Likewise, we do not consider enlargement factors, because we are considering “perspective-correct viewing distances.” It is because of out-of-the-box calculations that lens manufacturers can place DoF scales on their lenses, as viewing distance and enlargement factors do not enter the equation.

My statement is based on achieving the same image —therefore same perspective, or distance— with different lens/sensor combinations, and both images enlarged to the same total width, and viewed from the same perspective corrected distance.

So, “they” say that DoF is dependent on the ‘CoC’ —and they are probably referring to the “Circle of Confusion,” not the “Cone of Confusion”— the focal length, f, the focus distance, x, and the aperture —and by aperture, they probably mean the f-number— and you do not see how these are related? Okay, let us see how “they” did it.

At DoFSimulator dot net, they use the formula….. Wait…. Oh, they don't actually tell you. Well, that makes things difficult. …But we do know that, by looking at the DoF box, by, ‘CoC,’ they mean, ‘Circle of Confusion,’ introducing several rounding errors.

This means that their calculations must be based on sensor size, enlargement size, and other factors (such as resolution), to figure out when a circle gets confusing. That already means that they are not using a ‘good DoF calculator,’ especially when they do not include viewing distance and enlargement factor. One can assume they are using the same final image size, say, 10×8 inches,* and the same viewing distance, say, 12 inches from the face,* but they don't actually say. They are therefore using in-the-box calculations. We can confirm this by simply changing the sensor size from 35mm to FT/MFT and seeing the CoC halved.

*[These figures come from the (alleged) “standard” enlargement size of 10×8 inch paper, and the 135 format frame which, when enlarged to a height of 8in, would fit 12in across, so viewed from 12in from the face. Other calculators use 25cm, which is about 10in from the face (and a 10in-width enlargement), to do the calculations. We do not know what DoFSimulator dot net uses.]

Secondly, from the lens box, we know that by, ‘aperture,’ they mean f-number. They also ask for focal length, and we know that the aperture —i.e., iris diameter— is dependent on the focal length, and the f-number, (or really, the f-number is dependent on the focal length and iris diameter). So already we have introduced approximations.

So, considering that this is NOT a good DoF calculator, let us do this again, at 5m (a good portrait distance), with those two lenses, each with an aperture of approximately 25mm. Let us go with a full-frame sensor for the 100mm lens, and an MFT sensor for the 50mm lens, so as to achieve the same final enlargement size (via different enlargement factors), to achieve the same image.

50mm @ f/2.0 gives DoF = 57.6cm
100mm @ f/4.0 gives DoF = 57.0cm

Oh, wow! Will you look at that? Barely a 1% difference. Using a good DoF calculator, the DoF would be the same. But, to be sure, let us look at other DoF calculators, and see if they can show us their formulas.
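Before moving on, the cross-format comparison above can also be scripted with the textbook hyperfocal formulas. The CoC convention here (0.030mm for full frame, half that for MFT) is an assumption and differs slightly from whatever DoFSimulator uses, so the absolute numbers land a bit higher than its output, but the near-equality of the two lenses is the same:

```python
# Same "equivalent image": 50mm on MFT vs 100mm on full frame, both
# with a 25mm iris, focused at 5m. CoC values are the common
# conventions (0.030mm full frame, 0.015mm MFT) -- an assumption.
def dof(f, N, s, c):
    H = f * f / (N * c) + f               # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)  # near limit
    far = s * (H - f) / (H - s)           # far limit (valid while s < H)
    return far - near

print(dof(50, 2.0, 5000, 0.015))   # MFT:        ≈ 596 mm
print(dof(100, 4.0, 5000, 0.030))  # full frame: ≈ 590 mm
```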

① DoF Master dot com.
Their calculations (http://www.dofmaster.com/equations.html) begin with calculating the hyperfocal distance, H. They then use that to calculate the minimum acceptable focus distance, Dn, and the maximum acceptable focus distance, Df.

Their hyperfocal distance calculation shows dependence on the focal length and the circle of confusion, but they do not state how they calculate the circle of confusion, only that it is derived from the given CoC for the 35mm format (0.030mm) and scaled for other sizes by the “crop factor” (a mostly useless term).

Since most places calculate the CoC for the 35mm format based on an enlargement size of 10×8in, viewed from a fixed distance of about 10–12 inches from the face, we see that they have simply ignored the perspective-correct distance and gone with a fixed value. You and I both know that DoF is absolutely dependent on viewing distance and enlargement factor, so let us move on.

Their minimum acceptable focus distance is not only dependent on H, but also, once again, f, and the subject distance, s. (I use, x, but they use, s. I will stick with their variables here.) So now any errors involving, f, are increased proportionately, (not even considering the f²+f factor of the calculation of, H).

Their maximum acceptable focus distance is also similarly dependent on H, f, and s, increasing the errors in a similar fashion.

DoF is simply Df−Dn, doubling the error again. To their credit, the f-number, N, is only used once (but in the calculation of H, the top of the pyramid), and is calculated as actual powers of √2, not the rounded approximations marked on most cameras/lenses.

Out-of-the-box calculations do not need the enlargement factor, film format size, or viewing distance, because they use the “Cone of Confusion,” or rather the “Arc of Confusion” from which the cone of confusion is calculated in the first place. The arc of confusion, vertically (or horizontally, when using only one eye), is based entirely on the diameter of the pupil of the eye and the radius of the eye. The first, for any given person, varies with light intensity, and both vary from individual to individual. The value of 0.27' is taken from a mean radius and pupil size in an “adequately lit” room. (This is one of the many reasons why the lighting at galleries is carefully controlled.)

Since the arc of confusion is what is used to calculate all the other factors except distance and iris diameter, one can use the subject distance, s, to see that the arc subtended by two objects depends only on how far away the two objects are and on the diameter of the iris.
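The geometry is simple enough to put in a few lines. To a first approximation (distances much larger than the focal length), the blur of a point at distance d, for a camera focused at s with an iris of diameter A, subtends an angle in which the focal length never appears. A sketch, with illustrative distances:

```python
# Object-space blur angle for a point at distance d (mm) when the
# camera is focused at s (mm) with an iris of diameter A (mm).
# Approximation assumes s and d are much larger than the focal
# length; note that focal length does not appear anywhere.
def blur_angle_mrad(A, s, d):
    return A * abs(d - s) / (d * s) * 1000.0  # milliradians

# A 50mm f/2 and a 100mm f/4 both have a 25mm iris:
print(blur_angle_mrad(25, 5000, 6000))  # ≈ 0.83 mrad for either lens
```

For a fixed final framing, any two lenses with the same iris diameter produce the same angular blur of that point, which is the whole argument in miniature.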

I have to go now, but I will at some point later show a diagram, illustrating what I just said.

To answer your question, “Did I make a mistake, or do you stand corrected?” The answer is, neither. I erred by not making myself clear. I hope this clarifies everything. If it does not, the diagram coming later will make it all clear.

Matthias Kirk:

Sure, if you change the sensor size you change the DOF. Since focal length is (roughly) reciprocal to DOF, and the crop factor enters linearly, doubling one and halving the other will result in roughly the same DOF.

May I remind you what claim you tried to refute?

The article is talking about how a wide angle lens will produce a deeper DOF than a classical portrait lens, not about how you can use the crop factor to calculate lens equivalencies across different camera systems.

M43: 50mm @ f/2.0 gives DoF = 57.6cm
35mm: 100mm @ f/4.0 gives DoF = 57.0cm

You know what you could also prove with this scenario by applying the same logic?

M43: 50mm @ f/2.0 gives field of view (diagonal) = 24.4°
35mm: 100mm @ f/4.0 gives field of view (diagonal) = 24.4°

Therefore the focal length doesn't affect the field of view. Wait, what?
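(Those field-of-view numbers come straight from the diagonal field-of-view formula; the sensor diagonals of ≈21.6mm for MFT and ≈43.3mm for full frame are assumed:)

```python
import math

# Diagonal field of view from sensor diagonal and focal length.
# Assumed diagonals: ~21.6mm (MFT), ~43.3mm (full frame).
def fov_diag_deg(diag_mm, f_mm):
    return math.degrees(2 * math.atan(diag_mm / (2 * f_mm)))

print(round(fov_diag_deg(21.6, 50), 1))   # MFT 50mm:         24.4
print(round(fov_diag_deg(43.3, 100), 1))  # full frame 100mm: 24.4
```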

But by all means present the diagrams. Will be interesting.
