A few years ago, Adobe introduced an alternate way of processing images that could help reduce artifacts. In the intervening years, many of its benefits have faded away, until now. Can this technique find a new use in processing images from non-Bayer sensors?
Raw files give the photographer a ton of information to work with. Every step of processing a raw file can have significant implications for the final image’s quality. While some raw converters, like RawTherapee, offer many different ways of performing demosaicing, Adobe Camera Raw, which powers Photoshop and Lightroom’s raw processing, has not presented the same degree of choice to users.
That changed in early 2019, with the introduction of Enhance Details. With Enhance Details, users could run their raw files through an alternate pipeline, which was supposed to “produce crisp detail, improved color rendering, more accurate renditions of edges, and fewer artifacts.”
I've always liked the idea of Enhance Details: trading off some processor time for an improvement in image quality. No matter how slight, it was always worth it, since a few extra seconds in post mean nothing for an image I might spend the next 20 minutes editing. That math has changed over the last little while, however. With the most recent updates of Lightroom and ACR, and with the latest generations of cameras, that slight benefit has all but disappeared.
One of the easiest examples is visible in an old Fuji X-T1 shot. With their quirky X-Trans sensor, these bodies benefited the most from extra care when processing the raw files. In this sample, you can see the better color performance and slightly improved edge detail. Again, these weren't processing changes that were going to revolutionize how your camera worked, but instead offered a small improvement at no cost.
On a more recent shot, like an image from my Z 7, there's virtually no improvement. In a few spots, I can see where it's just made a slightly different decision about how to represent a texture, but there's no meaningful improvement. I'm not sure if this is due to Adobe bringing over processing improvements into regular ACR, camera and imaging pipeline changes, raw format changes, or something else; there are too many pieces to say for sure. In the end, however, it doesn't really matter. For most of the cameras I use, there's not much benefit to the adjustment.
Something Old Returns
I recently got a new drone. The Mavic Air 2 uses a very interesting setup. Instead of a traditional sensor layout with one color filter per photodetector, a single color filter sits over each subgroup of four. This quad-Bayer arrangement means the Sony sensor is nominally 48 MP, but typical shots are binned down to 12 MP, combining those four photocells into one unit. While this can offer HDR benefits for video, for photos, it makes for a very unusual demosaicing process compared to most other sensors.
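To make that binning concrete, here's a minimal sketch of the idea in Python, assuming a simple digital average; the real sensor almost certainly bins in hardware, and may mix exposures for its HDR modes:

    import numpy as np

    def bin_quad_bayer(raw):
        """Average each 2x2 same-color subgroup of a quad-Bayer mosaic
        into one photosite, producing a conventional Bayer mosaic at a
        quarter of the pixel count (e.g. 48 MP down to 12 MP)."""
        h, w = raw.shape
        h, w = h - h % 2, w - w % 2  # trim to even dimensions
        # All four photosites in a 2x2 block share one color filter, so
        # a plain block average preserves the Bayer pattern at half the
        # linear resolution.
        blocks = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
        return blocks.mean(axis=(1, 3))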
I noticed very prominent moiré in my first few test shots at 12 MP. Blown up below, you can see the false colors appearing along the fence. These patterns, despite being small in the overall image, are a pretty ugly artifact.
From my experiences with Fuji's oddball sensors, I thought I'd give Enhance Details a try, and I was really quite surprised. With Enhance Details, the false colors were knocked right out, without any loss of acuity. In fact, in little spots throughout the frame, there were fewer artifacts and generally more consistent colors. On top of that, it was a very fast process, taking maybe three seconds per frame on a 3700X and RTX 2070.
Why Not 48 MP?
Interestingly, DJI gives users the option to shoot the sensor at its "full" resolution of 48 MP. Without binning, would the situation be better? To test it, I put the drone up and grabbed a few shots at 12 MP and 48 MP.
Overall, the 48 MP files had fewer issues with false color, but had a generally unpleasant "blockiness" or "worminess" at higher zoom levels when viewed at native resolution. Resized to 12 MP, they had better acuity than the native 12 MP shots without introducing any false color. They also had more noise in the shadows than the 12 MP shots, however, so the higher resolution wasn't just a straight upgrade.
Lastly, let’s take a look at the 12 MP shot processed through Enhance Details. Compared to the straight 12 MP shots, the moiré is gone. Compared to the resized 48 MP shots, the image is cleaner, with less noise and a roughly equal level of acuity in fine patterns.
At least in this implementation of the quad-Bayer sensor, there’s not much benefit to actually shooting at 48 MP. Between the longer shutter delay, increase in artifacts, and worse noise performance, you can easily hit the same performance via some smart processing. A simple upscale of the “Enhance Details” version of a 12 MP shot is perfectly competitive, while adding a bit of sharpening might even make it look better than the full-resolution version.
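If you want to try that recipe yourself, here's a rough sketch using Pillow. The filenames are placeholders, 8000x6000 should match the Mavic Air 2's full-resolution output, and the sharpening numbers are just a starting point to adjust to taste:

    from PIL import Image, ImageFilter

    # Upscale a 12 MP Enhance Details export to the 48 MP pixel
    # dimensions, then add a touch of sharpening.
    img = Image.open("enhanced_12mp.tif")          # placeholder filename
    big = img.resize((8000, 6000), Image.LANCZOS)  # 4000x3000 -> 8000x6000
    big = big.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))
    big.save("upscaled_48mp_equivalent.tif")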
These are pretty tiny details in the overall scheme of things, but it's an interesting result nonetheless and one that will certainly inform how I plan to use this camera going forward.
What This Means for Any Photographer
This is just one instance of one type of specialty camera, but I believe it reflects a broader trend in photography. Increasingly, lens and camera manufacturers are going with the fix-it-in-post strategy. What I mean by that is they're deprioritizing aspects of the physical camera that can be made up for in software.
In the drone’s case, it’s limited by size and cost constraints; you can’t hoist a full-frame sensor and lens onto a couple-hundred-dollar consumer drone. For many new camera lenses, it’s uncorrected vignetting and distortion, both of which are relatively easy to fix in post-processing. Across the industry, it’s taking the form of software developments, with computational photography serving as the headline feature of recent iPhones.
It’s not necessarily a bad trend, but rather one to be aware of. Post-processing has always been an essential step of creating an image, ever since the darkroom days. Now, it’s important to stay informed of these latest developments to make sure you’re getting the most from your equipment. As that digital envelope has expanded and gotten more complex, knowing what you can and can’t accomplish in post is becoming just as important a skill as knowing how to dial in settings in the field.
Comments

You haven't mentioned how you've processed your Mavic raw files. ACR uses the embedded image profile, which can't be turned off. The "worminess" you see is from the aggressive noise reduction and sharpening. To see the true raw images, you should use PixInsight or another raw processor that doesn't take the embedded image profile into account. Here is a comparison between a JPEG image that came directly from the drone and a raw file of the same shot processed in ACR and PixInsight.

What's very surprising to me is how good the JPEG looks compared to the raw. Even in the middle of the frame, there is some kind of halo around the white lines in the raw images that isn't seen in the JPEG. And if you look at the extreme left side of the frame, you'll notice that the raw with the embedded profile applied is cropped even more than the JPEG, and its chromatic aberration is worse than in the "true" raw file. I wonder if DJI is factory-calibrating each individual camera and doing some additional processing (deconvolution, maybe) before saving the JPEGs, in order to get rid of the halo and the other lens artifacts that may be unique to each individual unit. If that's the case, I'd just use the JPEGs instead of raw.
Many new cameras are built with the expectation that you'll be correcting for lens issues in post - for instance, vignetting on the Z 14-30mm. I'd expect the same to apply to the Mavic Air 2's camera as well, considering the design compromises that must be involved in a camera of that size.
The crop is from that lens correction that you mentioned - it's not something I'm particularly worried about. If we're talking about quality concerns, the top of the white SUV next to the van shows that the pixels being cropped out by lens correction aren't of much quality to begin with.
As for my comparison, I'm just processing through ACR with default settings, as it was meant to compare two methods of demosaicing, rather than to compare JPEG to raw, or one raw processor to another. For the smaller portion of my work that this camera would constitute, it doesn't make sense to completely change my workflow by moving to another raw processor.
I'll be curious to see if subsequent ACR updates improve the appearance of these files - this type of sensor is becoming more popular with cellphones, and I know Lightroom CC has made a big push for mobile use.
Yes, indeed. ACR is my preferred tool, and I just wish there were an option to disable the embedded image profiles. I build a lot of 360-degree aerial panoramas, and ideally, I'd like to have images with no lens corrections applied, in order to avoid manipulating pixels twice.
If you're comfortable with a tiny bit of CLI work, you can use ExifTool to remove that lens correction flag. Here's more info: https://community.adobe.com/t5/camera-raw/deactivate-integrated-lens-pro...
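For reference, here's roughly what that looks like; as I understand it, OpcodeList3 is the DNG tag that carries the baked-in warp correction, the filename is a placeholder, and ExifTool keeps a "_original" backup of each file it rewrites:

    import subprocess

    # Blank the OpcodeList3 DNG tag so raw converters no longer see the
    # embedded lens correction. Equivalent to the command line:
    #   exiftool -OpcodeList3= DJI_0001.dng
    subprocess.run(["exiftool", "-OpcodeList3=", "DJI_0001.dng"], check=True)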
This works too: https://triplegangers.com/index.php/blog/cat/technology/post/linear-pipe...
When will people learn that the "worminess" is not from noise reduction and sharpening? It's from LR/ACR's demosaicing. Even if you turn off all the noise reduction and sharpening in LR/ACR and then sharpen in Photoshop, the "worminess" will come back. :/
It's definitely related to the demosaicing process rather than a sharpen/NR step. The way sharpening and NR act on the underlying pixels is really easy to understand, and there's no mechanism by which either of those would produce these patterns across tens of pixels.
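For the curious, an unsharp mask is roughly this, sketched in Python. Each output pixel depends only on a small blurred neighborhood, which is why sharpening can exaggerate existing detail but has no way to draw maze-like structure spanning tens of pixels:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=1.5, amount=0.8):
        """Classic unsharp mask on a single-channel (grayscale) image:
        boost the difference between the image and a blurred copy of
        itself. The operation is strictly local, so it can only amplify
        detail the demosaicing step already created."""
        blurred = gaussian_filter(img.astype(float), sigma=radius)
        return img + amount * (img - blurred)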
But LR's noise reduction and sharpening have always been super bad...
It's not only LR/ACR's demosaicing, but any tool that uses the VNG method to debayer images.
So Lightroom uses the VNG method to debayer images?