Content Credentials (Probably) Won’t Save Photojournalism

If an editor sent a photographer out to get some "weather art" to show a hot summer day, the photo above of a child playing in a sprinkler park would probably make a good image for the newspaper, right?

Look again.

The photo above was manipulated using Adobe's generative AI feature in the latest version of Photoshop. All it took to change the photo was a little bit of lasso tool and a prompt that said "remove person in background." In actuality, the photo looked like this, with another child behind the stream of water:

The main photo without any edits made with AI.
The ease with which these kinds of edits can be made poses a real danger to photojournalism, whose stock-in-trade is truthful images. Already, prominent photo contests are falling victim to staged photography, and when elements of photos can be removed or added in seconds with generative AI, it does not bode well for the future of truth in imaging. In the past, photojournalistic fraudsters such as the Toledo Blade's Allan Detrich would composite in basketballs or remove feet and other elements from photos, but it took considerable effort. That's no longer the case, and there's little way to track such changes.

What Are Content Credentials?

There's a new camera on the block that supports a new standard called "Content Credentials." That camera is the Leica M11-P, an almost $10K rangefinder-style camera. There are a lot of technical bits behind the standard, but the short version is that by using the framework from the Content Authenticity Initiative and a standard from the Coalition for Content Provenance and Authenticity (C2PA), the camera itself can embed secure metadata directly into the file at the point of creation. Canon, Nikon, and Sony are all involved with the CAI as well, but they have yet to release a camera with these credentials built in.
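
For the technically curious, here's a rough idea of what "embedded directly into the file" means in practice. The Python sketch below is a minimal illustration, assuming (as the C2PA spec describes for JPEGs) that the manifest travels as a JUMBF box inside APP11 marker segments; the function name is made up for this example, and it only detects whether a manifest appears to be present rather than verifying anything cryptographically, which is what a real C2PA validator would do.

```python
"""Rough sketch: does this JPEG appear to carry an embedded C2PA manifest?

Assumes the manifest is stored as a JUMBF box inside APP11 (0xFFEB) marker
segments, per the C2PA spec for JPEG. This only detects the segment; it
does NOT verify the signature or the manifest contents.
"""
import struct
import sys


def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":            # SOI marker: not a JPEG at all
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:           # lost marker sync; give up
            break
        marker = data[offset + 1]
        if marker == 0xDA:                 # SOS: compressed image data follows
            break
        seg_len = struct.unpack(">H", data[offset + 2:offset + 4])[0]
        segment = data[offset + 4:offset + 2 + seg_len]
        if marker == 0xEB and b"jumb" in segment:   # APP11 segment holding a JUMBF box
            return True
        offset += 2 + seg_len
    return False


if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```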

This would pair with social media sites adding a content credentials icon alongside images so that users can track the image's provenance and edits. On the surface, all of this sounds great.

The problem is two-fold. First, while Leica has embraced open-ish standards in the past, such as Adobe's DNG raw format, other manufacturers have generally gone their own way. It's a good PR move for camera companies to join such initiatives, but actually doing the work to implement a feature in cameras that a very limited pool of photographers will use is another thing entirely.

Second, and this is the biggest issue here: the world at large doesn't seem to care about authenticity in imaging. Photos are mixed, remixed, and remixed again as they're posted and reposted to TikTok, Instagram, Facebook, and the rest. I doubt most teenagers or twenty-somethings on these platforms care about the authenticity of a photo. If an editor at the New York Times needs a cell phone photo from someone on the scene of an event, they're probably going to have to look the other way when the submitted photo doesn't have content credentials, because the ordinary person taking it wasn't using an app that supports them. Adding that authentication later also seems to pile a lot of extra work onto the workflow.

All of that is a shame, because the inclusion of this feature on an actual production camera should be bigger news than it has been. Truly, for this to work, all of the major manufacturers and software developers need to be on board and not just pay lip service to the tech by joining a coalition.

If you're curious to learn more about how the technology works, there's a great explainer video here.

What do you think about content credentials? Is it an important feature to you to have in a camera?


Wasim Ahmad is an assistant teaching professor teaching journalism at Quinnipiac University. He's worked at newspapers in Minnesota, Florida and upstate New York, and has previously taught multimedia journalism at Stony Brook University and Syracuse University. He's also worked as a technical specialist at Canon USA for Still/Cinema EOS cameras.

8 Comments

Right now, Content Credentials is just a way for media outlets to reduce the risk of a photojournalist committing fraud by submitting an edited image as genuine. That's important, because it has been a real problem in the past, and we don't even know how frequently it happened. Of course, PJs can still stage photos, just as they always have.

This becomes practical and meaningful when Apple deploys it to iPhones, and iMessage, Instagram, Facebook, TikTok, etc., start showing a check mark on genuine verified content. I believe that'll happen in the next couple of years, and it'll get the ball rolling.

I think a year or two ago I tried to see if you could link an image to a specific phone/serial number and discovered that it was not possible to do so. Perhaps for privacy reasons? I would think that Apple might shy away from this for the same reasons in the default camera app that everyone uses. I could see a third-party app adding this kind of functionality but I have my doubts we'll ever see it integrated in any meaningful way by Apple. But you are right, in that until it gets integrated thoroughly into phones and social media, it won't get off the ground.

One issue is that such a system still relies on trust. For example, someone could produce C2PA-signed video that presents altered footage as genuine output directly from a camera, simply by feeding the edited content into the camera's raw buffer and having it write the file with its C2PA signature. Manipulating a camera's memory is common practice when developing firmware mods. Or someone could provide false authentication the way people use those Shenzhen-market HDMI recorders that capture video and audio over HDMI by tricking HDCP into thinking it is outputting directly to a TV.

For systems like that, all it takes is one person getting it to verify something altered as a genuine, unaltered capture from a camera, and the entire system loses trust.
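
To make that concrete, here's a toy Python sketch (using the cryptography package, with a made-up "camera key"; none of this is the actual C2PA signing flow). The point is that the signature math happily verifies whatever bytes it was handed, so a compromised signing step rubber-stamps doctored content just as readily as an honest capture.

```python
"""Toy illustration of the trust problem: a valid signature only proves that
*these bytes* were signed by *this key*. It says nothing about whether the
bytes are an honest sensor capture or edited content injected into the
camera's buffer before signing."""
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()        # stands in for the camera's secret key

honest_capture = b"...real sensor data..."
doctored_image = b"...edited pixels fed into the raw buffer..."

# The "camera" signs whatever ends up in its buffer.
sig_honest = camera_key.sign(honest_capture)
sig_doctored = camera_key.sign(doctored_image)

public_key = camera_key.public_key()

# Both verify cleanly -- verify() raises InvalidSignature only on a mismatch.
public_key.verify(sig_honest, honest_capture)
public_key.verify(sig_doctored, doctored_image)
print("Both signatures verify; the math can't tell you which image is honest.")
```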

Furthermore, such systems have been tried in the past for various purposes, and all have failed due to costs and paywalling: companies would charge to access the info or charge steep fees to use their service, leaving the service with no unique claim. To avoid that, C2PA would essentially need to convince the public that every image not using the system is fake or maliciously manipulated. If it cannot do that, it will be charging for something the general public has free, convenient workarounds for.

Beyond that, the system still relies on trust, where even if the process has not been cracked yet, you need to trust that the "photographer" didn't generate an image, then capture an image of that generated image using a camera.

Finally, a system like this has strong potential to lend false validity to fake or maliciously altered imagery as soon as someone successfully spoofs or fakes an initial camera capture.
Imagine making a fake or altered image of someone depicting them doing something bad, then finding a biased outlet to run a hit piece based on it. They will be able to add some plausible deniability by stating, "here is an image showing John Doe engaged in felonious behavior, and according to C2PA, the image was not tampered with."

I think this is getting into the realm of info-sec in general, in that no system will ever be perfectly secure. This is probably going against the grain of popular sentiment in my field of tech, but I think the solution to the firmware mods you speak of is to have the manufacturers sign the mods. It would not be unlike the way Apple signs applications. It would not be perfect, but we have a pretty good idea that when you buy an app from the App Store, there's a 98% chance it's not going to root your phone.
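
Here's roughly what I mean, as a toy Python sketch with a made-up vendor key rather than any real manufacturer's update mechanism: the camera would only accept firmware whose signature checks out against a public key baked into the body.

```python
"""Sketch of the "manufacturers sign the mods" idea: firmware is accepted
only if its signature verifies against the vendor's baked-in public key.
Hypothetical example, not any real vendor's update process."""
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()        # held by the manufacturer
baked_in_public_key = vendor_key.public_key()    # shipped inside every camera body

approved_mod = b"firmware blob reviewed and signed by the vendor"
signature = vendor_key.sign(approved_mod)


def camera_accepts(firmware: bytes, sig: bytes) -> bool:
    """Return True only if the firmware's signature verifies."""
    try:
        baked_in_public_key.verify(sig, firmware)
        return True
    except InvalidSignature:
        return False


print(camera_accepts(approved_mod, signature))       # True: signed by the vendor
print(camera_accepts(b"tampered firmware", signature))  # False: signature doesn't match
```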

The fact that a fair chunk of the general public doesn't care (for things like social media) does not necessarily matter. What matters is that a strong enough demographic exists for verified content, and (hate to say the word) there exists a way to monetize that market. Google and FB haven't killed off papers like Der Spiegel and the New York Times, so I think such a market exists.

Content fraud in photojournalism is bad enough, but what is (or will be) worse is evidence fraud using altered images in criminal and civil proceedings. The abuse of AI imaging, in both stills and video, is just starting. Imagine what it will be like in the future if unchecked.

I agree with you that evidence fraud is going to be a huge issue in the future. Any defense will just have to say, "how do we know that wasn't generated or manipulated with AI?" But what is really scary is how easy it is, or will be, to use AI to frame innocent people. Think of it: how hard would it be for a parent in a divorce case to generate AI images of the other parent abusing their child?

Wasim, you nailed exactly what the problem is with AI by this statement "And that's the biggest issue here: The world at large doesn't seem to care about authenticity in imaging." And when the world finally does realize it should care about authenticity, a lot of damage will already be done.

Having just gone through a divorce, I'm really glad it was done before AI imaging was really a thing. That's a scary thought.