It seems like every recent camera announcement has brought a higher megapixel count, including Sony's latest 60+ MP release. But whether you're shooting on 24 MP APS-C, 50 MP full frame, or 100 MP medium format, you might not be getting all the resolution you paid for. Here are three clarity-robbing problems and their fixes.
Debayering
Your sensor doesn't actually see the full range of color at every photosite. Instead, an array of colored filters, combined with some clever interpolation, transforms luminance and partial color information into a usable picture. This process is called debayering (after the Bayer filter typically used) or demosaicing. Demosaicing, particularly when applied to raw files from atypical sensor layouts like Fuji's X-Trans, can produce different results from one raw processor to another.
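If you're curious what's happening under the hood, here's a minimal sketch of the idea in Python (using NumPy), assuming the common RGGB Bayer layout and naive neighbor averaging. Real raw converters use far more sophisticated, edge-aware algorithms, so treat this as an illustration of the principle rather than any processor's actual implementation.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Naive demosaic of an RGGB Bayer mosaic (illustration only).

    raw: 2D float array, one sample per photosite. Zero-valued
    sites are treated as "missing" for simplicity; real converters
    use edge-aware interpolation rather than blind averaging.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Scatter each photosite's sample into its own color channel.
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]  # red sites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]  # green sites (red rows)
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]  # green sites (blue rows)
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]  # blue sites
    # Fill each channel's gaps with the mean of its known neighbors.
    for c in range(3):
        chan = rgb[:, :, c]
        known = chan > 0
        vals = np.pad(chan, 1)
        mask = np.pad(known.astype(float), 1)
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += vals[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                count += mask[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
        chan[~known] = (total / np.maximum(count, 1))[~known]
    return rgb
```

Roughly two-thirds of every pixel's color information is interpolated this way, which is exactly why different raw processors can render the same file differently.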
Adobe Camera Raw, which powers Lightroom and Photoshop, has had a particularly rough history with Fuji's files, occasionally producing weird shapes in green areas. In response, Adobe introduced a tool called Enhance Details, a new twist on its demosaicing process that happens to provide better results for all sensors. This process isn't meant for every photo, though: it is significantly more computationally expensive, taking upwards of 15 seconds per image from my Z7, for example. For certain images, however, it is worth the wait.
If you have false colors appearing on fine patterns (called moiré), or have seen the sometimes-weird results from your X-Trans sensor's files, give it a try. In Lightroom's Library module, right-click the image and select Enhance Details. A pop-up will show you a preview of the results, as well as the expected render time. This tool can have any degree of effect, from basically unnoticeable to image-saving, so check it out on a couple of images before dismissing it outright.
Random Noise
Stacking images, a technique already familiar to astrophotographers and users of Sony's Pixel Shift, can greatly improve resolution and noise performance. Fortunately, you don't need to switch brands or buy a telescope to get the same results.
To stack successfully, all you need to do is shoot multiple images without moving the camera. These additional samples can then be combined in Photoshop to improve the base image. For best results, try this on an image shot at high ISO, without much movement in the frame. If you have a steady hand and a high-frame-rate camera, you can shoot your images without a tripod, but locking your camera down will guarantee easy alignment.
Once you've got your set of images opened as layers in a single Photoshop document, align them by selecting all the layers and choosing Edit > Auto-Align Layers. From here, I like to duplicate those layers with Ctrl+J, then convert them to a Smart Object by right-clicking the selected layers. Finally, choose Layer > Smart Objects > Stack Mode > Median.
This will take a few seconds to process, but it results in Photoshop selecting the middle value for each pixel across the set of layers. In practical terms, that means a significant reduction in noise. While it won't get rid of truly hot pixels, which are stuck on, or dead pixels, which are completely black, it noticeably improves the appearance of the image.
You can think of noise as a range of values distributed around the actual value. With more samples (more images), you can better determine what each pixel's value should be. In any individual frame, especially at high ISO, a photosite may receive slightly more or less light, producing random patterns of noise. With that in mind, you can see how stacking is effective with just a few frames but improves with more. For typical subjects at high ISO, I've seen diminishing returns at about 8 frames, but deep-sky images can use hundreds.
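If you'd rather script the median step than run it through Photoshop, the operation itself is tiny. Here's a minimal sketch using NumPy and the imageio library, with hypothetical file names, assuming the frames are already aligned:

```python
import numpy as np
import imageio.v3 as iio  # third-party: pip install imageio

# Hypothetical file names; the frames must already be aligned.
frames = [iio.imread(f"frame_{i:02d}.tif").astype(np.float64)
          for i in range(8)]

# Random noise scatters each pixel's value around the true one,
# so the per-pixel median of several samples lands close to it.
stack = np.stack(frames, axis=0)
median = np.median(stack, axis=0)

iio.imwrite("stacked_median.tif", median.astype(np.uint16))
```

Note that a stuck-on hot pixel is "hot" in every frame, so the median can't remove it, matching the behavior described above.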
While this technique is especially well suited to high ISO, the same process can improve very fine detail in normal-ISO images. Even locked down on a tripod, your camera may shift a tiny amount between shots, and those sub-pixel shifts let you imitate the effects of Sony's Pixel Shift. It doesn't deliver quite the same performance gain, however.
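For a feel of how that works, here's a toy version of the idea in Python: upsample each frame, register it to the first with sub-pixel precision, then average. It leans on SciPy and scikit-image, and dedicated tools do far more careful resampling and weighting, so this is a sketch of the principle, not Sony's actual method:

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation  # pip install scikit-image

def superres_stack(frames, scale=2):
    """Toy super-resolution stack for grayscale frames that differ
    only by tiny sub-pixel shifts: upscale, register, average."""
    up = [ndimage.zoom(f.astype(float), scale, order=3) for f in frames]
    ref = up[0]
    aligned = [ref]
    for frame in up[1:]:
        # Estimate the residual shift to 1/10 pixel, then undo it.
        shift, _, _ = phase_cross_correlation(ref, frame,
                                              upsample_factor=10)
        aligned.append(ndimage.shift(frame, shift, order=3))
    return np.mean(aligned, axis=0)
```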
Shake and Bake
At higher resolutions, the impact of shake becomes more apparent when viewing images at 100%. While new cameras are deploying ever more advanced image stabilization systems, there is still something to be said for proper technique.
Shooting handheld can introduce a number of issues, and they all come back to shutter speed: unsteady hands, over-reliance on VR (I'm guilty of this one), or raising the ISO too high just to reach a workable shutter speed. The solution will depend on the situation, and sometimes there isn't one, like when you're shooting a concert from the pit. Potential fixes include adding light via flash, raising the ISO, bracing yourself against something, or deploying a tripod or monopod.
Even on a tripod, however, not all the problems are solved. You can still see the effects of an unsteady tripod or head, mirror slap, or residual shake from pressing the shutter. To address the camera-based causes, try mirror lockup, a shutter delay, a remote trigger, or an electronic front-curtain shutter. For tripod issues, besides getting sturdier gear, you can try hanging your bag from the center column to provide some damping.
All these factors together mean that to get the most out of a high-megapixel camera, you have to really emphasize technique and discipline in taking the shot. That can mean dozens of things to check for each frame, which can really slow down your process. If you have the time and dedication to put each of these best practices into use, you can be virtually guaranteed the best performance from your camera. But we aren't perfect, and I can't imagine many situations that call for every one of these techniques at once.
Instead, try to be aware of these things, and double check them the next time you notice a shot isn't coming out as sharp as it should be. Some techniques, like stacking, can produce dramatic improvements when used appropriately. Others, like the Enhance Details option and mirror lockup, are more niche, but together, they can all help build your technical skill as a photographer, and squeeze every last bit of performance from your camera.
Also, with stacking you can drizzle your images at 2x, 3x, or even more, so you can double or triple the resolution. Even with my 12 MP Sony SII, I'm getting 51 MP!
Stacking is just great, except for moving objects!
Great point. Do you use Deep Sky Stacker's implementation or another program?
Yes, Alex, with DSS you can do Drizzle 2x, so you get double the pixels both vertically and horizontally.
Just use the latest 64-bit version (the 32-bit one crashes at this pixel count).
Here is the size of a Drizzle picture from a 12 MP Sony SII.
Can you put that in English, please? What are Drizzle and DSS?
Hi Jacques - Drizzle is an option offered in DSS (Deep Sky Stacker).
Deep Sky Stacker is a freeware tool aimed at the astrophotography community that lets photographers align and stack a series of images to produce a cleaner (less noisy, for example) final result.
Drizzle works much the same way as some cameras that stack and upscale their pictures to extreme pixel counts.
Here's some more info. It's not only DSS; a lot of astronomy imaging apps can do this. I think PixInsight can take it even bigger.
https://en.wikipedia.org/wiki/Drizzle_(image_processing)
I suppose the most detail we can get is from a monochrome sensor. The 18 MP Leica Monochrom could resolve detail about on par with a D800E or D810, so I'm guessing we lose about half our spatial resolution (if we define resolution by detail, not just pixel count) to Bayer interpolation. Probably about the same with X-Trans. Foveon is a different story and punches far above its weight class, but only at very low ISOs. Monochrome would resolve more detail and should be better at higher ISOs (assuming all other tech is the same).
Most people are losing "resolution" to poor technique, though. Once I got a D810 (my first camera above 24 MP), I started to shoot very differently: a lot more tripod, mirror-up, self-timer or cable release. At least 1/(2x focal length) shutter speeds for handheld, preferably 1/(3x). These higher-density sensors are really taxing not only on lenses but on your technique, and they quickly show if it's poor. (Note that it's not pixel density itself that matters, but pixel density per degree of view: a 24 MP full frame and a 24 MP APS-C would require the same shutter speeds if the FOV is the same.)
My work has definitely shifted from run-and-gun to more deliberate handheld or tripod work. That has not only improved the technical quality of my photos, but also made me a better photographer in general. It allows me to take my time, know exactly what I want, and then execute that with the best IQ I can get.
It's also cost me a lot of money upgrading my lenses... (though don't get me wrong: any modern camera will produce great photos. I use my iPhone all the time now because I can shoot DNG raw files in Lightroom Mobile, with manual controls, and then edit them in ACR/Photoshop on a computer later).
Sounds like we went through a similar realization with the shift to higher MP sensors. I was previously using a D3s at 12mp, so the jump to a D800 really showed some errors in my technique.
I was using a D700 before the jump to D810, so pretty much the same. I had bought a Sony a7 but didn't get along with that at all so I sold that. So most of my work was 12MP (if it was Full frame anyway).
Add sensor cleaning to the list, especially if you don't do it very often.
Air is dirty, and you will see an improvement with a recently cleaned sensor.
Agreed. Surprisingly, I haven't seen nearly as much sensor dust on my Z7 as compared to my D810. Not sure if it has to do with the shutter implementation or similar, but it has stayed remarkably clean.
If you are cleaning the sensor, be very careful: between the coatings and the sensor-stabilization mechanism, a lot of new cameras seem very delicate.
The Z cameras seem to have very effective in-camera sensor cleaning.
I've never had to clean my Olympus in over five years. I had to clean Sony cameras all the damn time. No idea what the difference is.
Supposedly, Nikon locks the sensor down when the camera is off, so it should be safe to clean, but I'd still be careful - especially with wet cleaning.
It appears to be more than sensor dust. Not all sensor coatings are the same; that is what I was implying. I think you may also get a haze that affects the IQ. Almost any surface will develop an oxide coating when exposed to oxygen.
It may be more manufacturer-specific, due to the exact coating used.
Think about it ...
It always bothered me why Canon, Nikon, and Sony users were always on about the best methods to clean their sensors, while I never really cleaned mine. At first, I attributed it to the excellent "self-cleaning" (dust-removal) feature, and then to the weather-sealed nature of my kit.
I was actually more surprised when they insisted that the camera must always be pointing down when changing the lens. (Traditionally, I always had the rear lens group facing down to keep it dust-free.) How shocked I was to find out that some technician/engineer thought it would be a great idea to have the normal rest position of the camera be with the shutter in the open position!!!
To what benefit? Even if you have a MILC with no OVF, why would the shutter remain open when the camera is off, or during lens changes?
Nevertheless, cameras whose default position is shutter-closed tend to require less sensor cleaning. There is a shutter (and, in the case of DSLRs, a mirror) between the dirty outside air and the sensor, but there is nothing between a lens's rear element and the elements (pun definitely intended) during lens changes.
One can argue that a lens is easier to clean than a sensor, but an unprotected lens is easier to soil than a protected sensor.
If lighting is optimal, shoot with a Foveon-sensor-based camera for the most cost-effective color accuracy and detail.
I've seen some good results out of the Foveon style sensors. I wish there was more widespread adoption. Phase One's Trichromatic sensor is also phenomenally accurate, although that has more to do with all the links in the image capture chain being high quality.
I think one of the things that rob amateurs of resolution is using low-quality lenses. Good glass really does make a difference. It doesn't make a lot of sense to use a higher resolution sensor if your glass only resolves to 10mp perceived sharpness.
Agreed. I think that's why so many people are impressed when they get their first 50mm prime. Typically it's their first fast and high quality lens, and is clearly higher performing than their kit lens.
I first had only APS-C lenses. I thought they were good quality until I changed to FF and bought top glass... Hell, that was an awakening... ;)
Sharp light = sharp images. Soft light = soft images.
So, what are you going to do with all these high resolution, super sharp, 24+ megapixel images that you don't print?
Bold assumption that I don't print - have you seen my walls?
I can't believe no one thought of this. If you are shooting in the dark with a tripod, you don't need to take a bunch of photos and stack them to reduce noise. Just take a long exposure at low ISO. Or am I missing something?
I gave you a thumbs up, but yes, you are missing something.
If the subject is moving, like stars, then one either has to use AstroTracker™ on Pentax, or some special astro-tracker tripod attachment.
Otherwise, yes. Quite a valid solution.
For some subjects, that works great. If you have star movement, a breeze blowing through the trees, or are shooting without a shutter timer, you may need to stack instead. Even at low ISOs, you can still benefit.
§
De-Mosaicing
§
It is not called "de-Bayering"; it is called "Bayer transform approximation" (BTA) with Bayer color filter array sensors, and "color filter array transform approximation" (CTA) generically.
Even the Foveon® sensor, although it catches the full intensity of light at every pixel, needs a colour transform approximation for colour in the shadow detail. This is because the sensor is less sensitive to light detected at the bottom of the stack, so the actual colour has to be approximated based on light intensity.
§
Pixel Shift
§
There are two ways that PixelShift™ can be done. The first method, tripod PixelShift, stacks four colours (RGBG) on each pixel (without an anti-aliasing filter), effectively giving full colour and intensity information at each pixel and removing the need for any form of transform approximation. The second method, handheld PixelShift, creates an array image, effectively increasing the resolving power of the lens.
Both methods increase detail and reduce shot noise without increasing the nominal pixel count, so an (x)Mpx sensor will produce an (x)Mpx PixelShift image.
There is a third way of doing pixel shift, à la Hasselblad, Olympus, and Sony, where the sensor moves in half-pixel increments, creating an image of four times the nominal pixel count, so an (x)Mpx sensor will produce a (4x)Mpx pixel-shift image. This image will, however, still need a CTA, since not all pixels get all colour/intensity information, and the image will effectively have an AA filter, since each pixel sees some information from its neighbour.
Still, a (4x)Mpx BTA image with AA filter is arguably better than an (x)Mpx BTA image without.
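To make the tripod method concrete, here's a minimal sketch in Python (NumPy) of how four one-photosite-shifted RGGB mosaics could be merged into full colour with no transform approximation at all. The offset order and data layout here are assumptions for illustration; real cameras record the actual shift sequence in their raw files.

```python
import numpy as np

# Assumed one-photosite sensor offsets for the four exposures.
OFFSETS = [(0, 0), (0, 1), (1, 1), (1, 0)]
# RGGB filter layout: channel index (0=R, 1=G, 2=B) per CFA cell.
CFA = np.array([[0, 1], [1, 2]])

def combine_tripod_pixelshift(frames):
    """frames[k][y, x] is what pixel (y, x) recorded with the sensor
    at OFFSETS[k]; shifted by (dy, dx), that pixel sits under the
    filter normally covering photosite (y+dy, x+dx). Over the four
    shots, every pixel collects R once, B once, and G twice."""
    h, w = frames[0].shape
    yy, xx = np.mgrid[0:h, 0:w]
    total = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    for (dy, dx), frame in zip(OFFSETS, frames):
        chan = CFA[(yy + dy) % 2, (xx + dx) % 2]
        np.add.at(total, (yy, xx, chan), frame)
        np.add.at(count, (yy, xx, chan), 1)
    return total / count  # the two green samples get averaged
```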
§
High Exposure Index with Average Stacking (Tripod)
§
A better alternative is low EI with additive stacking (or, simply, a long exposure). This reduces analogue (and digital) amplification, with its associated noise gain, by simply adding each exposure to the previous one.
The only reason† to do average stacking (as opposed to a long exposure) is a moving scene (such as in astrophotography) or the lack of a tripod. In either case, using a high EI is not necessary, and it may introduce unnecessary noise gain or detail-robbing in-camera NR.
†Another reason would be to emulate a long exposure, but such emulation is usually done at the lowest possible EI anyway, to maximise exposure time.
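As a toy demonstration of why the choice is about motion and headroom rather than noise statistics, here's a small simulation in Python (NumPy) with made-up numbers. It models shot noise only; per-frame read noise, which is where a single long exposure genuinely pulls ahead of stacking, is deliberately left out:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0  # hypothetical mean photon count per frame
# Eight shot-noise-limited frames of 100,000 identical pixels.
frames = rng.poisson(signal, size=(8, 100_000)).astype(float)

average = frames.mean(axis=0)  # average stacking
additive = frames.sum(axis=0)  # additive stacking / long exposure

# Averaging and adding differ only by a constant scale factor,
# so both improve SNR by sqrt(8) over a single frame.
for name, x in [("single", frames[0]), ("average", average),
                ("additive", additive)]:
    print(f"{name:8s} SNR = {x.mean() / x.std():5.1f}")
```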
§
Mirror Slap vs Shutter Shake
§
Mirror slap is often mitigated by an exposure time many times longer than the minimum flash synchronization time. This is because the longer the exposure, the more of it occurs after the vibration has been damped.
Shutter shake, OTOH, is often mitigated by an exposure time at a fraction of the minimum flash synchronization time. This is because significant shutter shake usually does not occur until after the first curtain has fully opened. At exposure times close to X-sync, both issues might rear their ugly heads.
On most modern cameras, unless one is using a long telephoto lens or doing macrophotography, neither of these issues is much of a problem; both mirror slap and shutter shake have been greatly damped. The biggest remaining issue is camera shake from actuating the shutter directly. A timer or remote is often enough (with a sturdy tripod on a firm floor).