Feature summary
A new initiative to help determine the provenance of an image, called “Content Credentials”, has recently emerged as one way to help detect AI-generated or otherwise manipulated images. Image-editing applications, AI-generation tools, and camera makers are implementing these credentials.
When an image that contains these credentials is uploaded to Wikimedia Commons, Commons should display this metadata alongside the image.
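As a rough illustration of what “contains these credentials” means at the file level: per the C2PA specification, manifests are embedded as JUMBF boxes carried in JPEG APP11 segments, with “c2pa” as the manifest store label. The sketch below is only a presence check for that label (the function name is hypothetical); it does not parse or cryptographically validate a manifest, which a real implementation would do with a C2PA library.

```python
# Minimal sketch: detect whether JPEG bytes appear to carry a C2PA manifest.
# Assumption (per the C2PA spec): manifests live in JUMBF boxes inside JPEG
# APP11 (0xFFEB) segments, labeled "c2pa". Presence check only -- this does
# NOT validate the manifest or its signatures.

def has_c2pa_marker(jpeg_bytes: bytes) -> bool:
    """Return True if the data looks like a JPEG containing an APP11
    segment whose payload mentions the C2PA JUMBF label."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync with the marker structure
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with C2PA label
            return True
        i += 2 + length  # skip to the next marker segment
    return False
```

Commons already extracts and shows EXIF/XMP metadata on file pages; surfacing C2PA data would follow the same pattern, with the manifest read and verified at upload or render time.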
Use case(s)
One potential use case is someone looking for an image to use on a Wikimedia project who wants to be as sure as possible that the image is an actual photo of what it claims to represent. Say someone wants to add a photo of a city, monument, or an example of something to a Wikipedia article. They search Commons and find a hundred photos of <thing>. Displaying the C2PA metadata would be one indicator of which of those hundred images show the actual <thing> and are not AI-generated or otherwise manipulated.
This basic concept would also extend to anyone wanting to use an image from Commons elsewhere, outside our projects. A student doing a report on a city, for instance, who wants to add an image of the actual city to their presentation.
Benefits
As AI-generated images proliferate and their quality improves, it is becoming more difficult to distinguish between a photo taken with a camera and one generated by a computer. Misinformation and disinformation are on the rise, and it is increasingly hard for the average person to determine what is “real”. Displaying this metadata is one additional indicator of the provenance of an image and would help people distinguish between an unmanipulated photo and a computer-generated one.
The confusion around what is real and what is AI-generated is leading to a distrust of, well, anything, but in this instance, institutions such as Wikipedia. If a well-meaning contributor added an AI-generated image to an article, and a reader discovered that the image was computer-generated rather than a photo of the subject being described, that could erode trust in our projects. The same could be said of images found on Commons and used elsewhere: people will begin to distrust what they find and use on Commons.
See also
- Content Authenticity Initiative - https://contentauthenticity.org/
- Content Credentials - https://contentcredentials.org
- https://en.wikipedia.org/wiki/Content_Authenticity_Initiative
- A good explainer video on how this works and its caveats: https://youtu.be/qO0WvudbO04