In a move to help speed up the company's workflow and, ostensibly, to stamp out severe editing, Reuters now not only accepts only JPEG images, but also mandates that those images not originate as edits of a raw file. How the agency can verify this is unclear (metadata and other data embedded in the photo might give experts hints), but the move is also meant to help maintain ethical photojournalism practices by reducing one's ability to alter a photograph so much that its meaning changes.
Undoubtedly, with as many images as Reuters must process through its servers, the agency is certainly thankful for JPEG submissions that save transfer time. But Reuters — as with most news agencies — likely already required JPEG submissions for this reason. Until now, it simply did not specify whether submitting an image edited from a raw file was allowed.
Reuters' apparent assumption that a JPEG image cannot be altered as easily, or as deceptively, as a raw image gets into sticky territory. According to The Verge, New York Times Director of Photography and World Press Photo Jury Chairperson Michele McNally said, "A large number were rejected for removing or adding information to the image, for example, like toning that rendered some parts so black that entire objects disappeared from the frame. The jury — which was flexible about toning, given industry standards — could not accept processing that blatantly added or removed elements of the picture."
While bringing out elements of a photograph that were not easily visible in the original file is certainly easier when editing the raw file (thanks to the greater exposure latitude raw files offer in the shadows), elements of an image can still easily be "removed" from a JPEG by darkening them. In addition, it's difficult to argue that an element of an image specifically should not be brought out in postproduction. Even if the camera cannot naturally "see" the object, if that object is "seeable" with editing, it was still recorded by the camera. And it was still potentially seen with the human eye. At what point do we let the limits — or, alternatively, abilities — of our cameras dictate what did or did not exist in a scene? What if one camera can "see" a dark object that another cannot? Do we err on the side of inclusion or exclusion?
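The asymmetry here can be made concrete with a toy sketch. This is not Reuters' workflow or any real raw converter; it is a hypothetical numpy simulation of the underlying numbers: a dim object recorded a few stops above black survives a heavy shadow boost in 12-bit linear "raw" data, is nearly crushed into the background once squeezed into an 8-bit "JPEG," and vanishes entirely under a simple tone curve that clips dark values to pure black.

```python
import numpy as np

# Hypothetical "raw" sensor data: 12-bit linear values (0-4095).
# A near-black background at level 16, with a dim object 40 levels above it.
raw = np.full((4, 4), 16, dtype=np.uint16)
raw[1:3, 1:3] = 56  # the barely visible object

# The same scene baked into an 8-bit JPEG: 4096 levels squeezed into 256,
# so the object ends up only ~2 levels above the background.
jpeg = (raw / 16).astype(np.uint8)  # background -> 1, object -> 3

# "Removing" the object is trivial: a tone curve that clips everything
# below a threshold to pure black makes it disappear from the JPEG.
crushed = np.where(jpeg < 8, 0, jpeg).astype(np.uint8)

# Recovering the object: a 4-stop (16x) shadow boost on each file.
boosted_raw = np.clip(raw.astype(np.int32) * 16, 0, 4095)
boosted_jpeg = np.clip(jpeg.astype(np.int32) * 16, 0, 255)

print("raw separation after boost :", int(boosted_raw[1, 1] - boosted_raw[0, 0]))   # 640 levels
print("jpeg separation after boost:", int(boosted_jpeg[1, 1] - boosted_jpeg[0, 0])) # 32 levels
print("crushed JPEG object value  :", int(crushed[1, 1]))                           # 0: gone
```

The raw file keeps 40 distinct shadow levels between object and background, so the boost reveals clean detail; the JPEG kept only two, so the same boost mostly amplifies quantization; and the crushed JPEG has nothing left to recover at all, which is exactly the "removal by darkening" the paragraph above describes.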
Naturally, the obvious answer would be to err on the side of inclusion: unless you outright Photoshop objects into a photo, you can't really lie by lifting your shadows and blacks to include as much real information as possible. Alternatively, you could easily "lie" by excluding a key piece of information shrouded in darkness.
What seems like an honest effort to increase accountability and the reliability of photographic information in fact seems to hamper photographers' ability to recover from tricky lighting situations or exposure miscalculations more than it serves its intended purpose. What are your thoughts?