How we’re increasing transparency for gen AI content with the C2PA


As we continue to bring AI to more products and services to help fuel creativity and productivity, we are focused on helping people better understand how a particular piece of content was created and modified over time. We believe it’s crucial that people have access to this information, and we are investing heavily in tools and innovative solutions, like SynthID, to provide it.

We also know that partnering with others in the industry is essential to increase overall transparency online as content travels between platforms. That’s why, earlier this year, we joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member.

Today, we’re sharing updates on how we’re helping to develop the latest C2PA provenance technology and bring it to our products.

Advancing existing technology to create more secure credentials

Provenance technology can help explain whether a photo was taken with a camera, edited by software or produced by generative AI. This kind of information helps our users make more informed decisions about the content they’re engaging with — including photos, videos and audio — and builds media literacy and trust.
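To make that concrete, here is a minimal Python sketch of how a validator might derive those three labels from a C2PA manifest’s "c2pa.actions" assertion. The manifest is shown as a simplified dictionary; real Content Credentials are CBOR-encoded JUMBF boxes carrying signed claims, and the IPTC digitalSourceType URIs below are the standard values for camera capture and generative AI.

```python
# Simplified classification from a C2PA "c2pa.actions" assertion.
# Real manifests are signed binary structures, not plain dicts.

GEN_AI = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
CAPTURE = "http://cv.iptc.org/newscodes/digitalsourcetype/digitalCapture"

def classify(manifest: dict) -> str:
    """Return a coarse provenance label from a simplified manifest."""
    actions = [
        a
        for assertion in manifest.get("assertions", [])
        if assertion.get("label") == "c2pa.actions"
        for a in assertion.get("data", {}).get("actions", [])
    ]
    created_sources = {
        a.get("digitalSourceType") for a in actions
        if a.get("action") == "c2pa.created"
    }
    if GEN_AI in created_sources:
        return "produced by generative AI"
    if any(a.get("action") == "c2pa.edited" for a in actions):
        return "edited by software"
    if CAPTURE in created_sources:
        return "captured with a camera"
    return "no provenance actions recorded"

print(classify({
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [{"action": "c2pa.created",
                              "digitalSourceType": GEN_AI}]},
    }],
}))  # -> "produced by generative AI"
```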

Since joining the C2PA as a steering committee member, we’ve worked alongside the other members to develop and advance the technology used to attach provenance information to content. Through the first half of this year, Google collaborated on the newest version (2.1) of the Content Credentials technical standard. This version is more secure against a wider range of tampering attacks, thanks to stricter technical requirements for validating the history of the content’s provenance. Strengthening the protections against these attacks helps ensure the attached data is not altered or misleading.
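One mechanic behind that validation: each claim in a manifest is cryptographically bound to the asset’s bytes through a hash, with certain byte ranges (such as the embedded manifest itself) excluded; if the content is altered after signing, the recomputed hash no longer matches. The Python sketch below illustrates that hard-binding check in simplified form; real C2PA validation also verifies the COSE signature over the claim and walks the full ingredient chain.

```python
import hashlib

def hash_matches(asset_bytes: bytes, claim_hash_hex: str,
                 exclusions: tuple[tuple[int, int], ...] = ()) -> bool:
    """Recompute the asset hash, skipping excluded byte ranges
    (e.g. the embedded manifest itself), and compare it to the
    hash recorded in the signed claim."""
    h = hashlib.sha256()
    pos = 0
    for start, length in sorted(exclusions):
        h.update(asset_bytes[pos:start])  # hash up to the exclusion
        pos = start + length              # then skip past it
    h.update(asset_bytes[pos:])
    return h.hexdigest() == claim_hash_hex
```

Any edit to the hashed ranges, however small, changes the digest and causes validation to fail, which is what makes stricter binding requirements an effective defense against tampering.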

Incorporating the C2PA’s standard into our products

Over the coming months, we’ll bring this latest version of Content Credentials to a few of our key products:

  • Search: If an image contains C2PA metadata, people will be able to use our "About this image" feature to see if it was created or edited with AI tools (a sketch of such a metadata check follows this list). "About this image" helps provide people with context about the images they see online and is accessible in Google Images, Lens and Circle to Search.
  • Ads: Our ad systems are starting to integrate C2PA metadata. Our goal is to ramp this up over time and use C2PA signals to inform how we enforce key policies.
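As a rough illustration of the first step in that Search flow, the sketch below scans a JPEG’s marker segments for an APP11 segment carrying a JUMBF box, which is how C2PA manifests are embedded in JPEG files. This is a hypothetical presence check, not Google’s implementation, and it performs no validation of the credentials it finds.

```python
import struct

def has_c2pa_metadata(jpeg_bytes: bytes) -> bool:
    """Heuristic: does this JPEG carry an APP11/JUMBF segment?"""
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not positioned at a marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows
            break
        # Segment length is big-endian and includes its own two bytes.
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in payload:  # APP11 + JUMBF box
            return True
        i += 2 + length
    return False
```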

We’re also exploring ways to relay C2PA information to viewers on YouTube when content is captured with a camera, and we’ll have more updates on that later in the year.

We will ensure that our implementations validate content against the forthcoming C2PA Trust list, which allows platforms to confirm the content’s origin. For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate.
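In practice, that check might reduce to asking whether the credential’s signing certificate chains to an entry on the trust list. The sketch below assumes a hypothetical list distributed as certificate fingerprints; the real list’s format and distribution are not yet published, and a real validator would also verify the X.509 chain and the signature over the claim.

```python
import hashlib

# Hypothetical trust list of SHA-256 certificate fingerprints; the
# format of the forthcoming C2PA Trust list is not final.
TRUSTED_CERT_FINGERPRINTS: set[str] = {
    "9b1d...",  # placeholder entry, e.g. a camera maker's signing cert
}

def signer_is_trusted(signing_cert_der: bytes) -> bool:
    """Return True if the manifest's signing certificate appears on
    the trust list. Real validation would also check the certificate
    chain, validity period and revocation status."""
    fingerprint = hashlib.sha256(signing_cert_der).hexdigest()
    return fingerprint in TRUSTED_CERT_FINGERPRINTS
```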

These are just a few of the ways we’re thinking about implementing content provenance technology today, and we’ll continue to bring it to more products over time.

Continuing to partner with others in the industry

Establishing and signaling content provenance remains a complex challenge, with a range of considerations depending on the product or service. And while we know there’s no silver-bullet solution for all content online, working with others in the industry is critical to creating sustainable, interoperable solutions. That’s why we’re also encouraging more services and hardware providers to consider adopting the C2PA’s Content Credentials.

Our work with the C2PA directly complements our broader approach to transparency and the responsible development of AI. For example, we’re continuing to bring SynthID, embedded watermarking created by Google DeepMind, to additional gen AI tools for content creation and to more forms of media including text, audio, images and video. We’ve also joined several other coalitions and groups focused on AI safety and research, and introduced the Secure AI Framework (SAIF) and an accompanying coalition. Additionally, we continue to make progress on the voluntary commitments we made at the White House last year.