Statement 8: Apply watermarking techniques
AI watermarking embeds visible or hidden markers in generated content so that its creation details can be identified. This gives content consumers transparency, authenticity, and trust.
Visual watermarks or disclosures provide a simple way for someone to know they are viewing content created by, or interacting with, an AI system. This applies both to generated media content and to interactive GenAI systems.
The Coalition for Content Provenance and Authenticity (C2PA) is developing an open technical standard for publishers, creators, and consumers to establish the origin and edits of digital content. Advice on the use of C2PA is out of scope for the standard.
Agencies must:
- Criterion 22: Apply visual watermarks and metadata to generated media content to provide transparency and provenance, including authorship.
This will only apply where AI-generated content may directly impact a user. For instance, a team logo generated with AI would not need to be watermarked.
- Criterion 23: Apply watermarks and metadata that are WCAG compatible, where relevant.
- Criterion 24: Provide visual and accessible indicators so users know when they are interacting with an AI system.
For example, this may include adding text to a GenAI interface so that users are aware they are interacting with an AI system rather than a human.
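As a concrete illustration of the metadata half of Criterion 22, the sketch below inserts a provenance note into a PNG image as a standard tEXt chunk, using only the Python standard library. The 1x1 test image, agency name, and model label are hypothetical stand-ins; a visible overlay would normally be added separately with an imaging library such as Pillow, and this sketch is not a substitute for a provenance standard such as C2PA.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build a PNG chunk: 4-byte length, type, data, CRC32 over type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def minimal_png() -> bytes:
    """A valid 1x1 greyscale PNG, standing in for generated media content."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grey
    idat = zlib.compress(b"\x00\x80")  # filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n" + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"IDAT", idat) + png_chunk(b"IEND", b""))

def add_provenance_text(png: bytes, keyword: str, value: str) -> bytes:
    """Insert a tEXt metadata chunk immediately after IHDR.
    IHDR is always first: 8-byte signature + 25-byte chunk = 33 bytes."""
    head, tail = png[:33], png[33:]
    text = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    return head + png_chunk(b"tEXt", text) + tail

# Hypothetical authorship details for illustration only.
marked = add_provenance_text(
    minimal_png(), "Comment",
    "AI-generated; author: Example Agency; model: demo-v1")
```

The resulting file remains a valid PNG, and the provenance text is visible to any metadata inspector; note that plain tEXt chunks are easily stripped, which is one reason hidden watermarks are also recommended.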
Agencies should:
- Criterion 25: For hidden watermarks, use watermarking tools based on the use case and content risk.
This includes:
- including provenance and authorship information
- encrypting watermarks for high-risk content
- using an existing tool or technique when practicable
- embedding watermarks at the AI training stage to improve their effectiveness and allow additional information, such as content modifications, to be included
- verifying that the watermark does not impact the quality or efficiency of content generation, for example through image degradation or reduced text readability
- including data sources, such as publicly available content used for AI training, to manage copyright risks, and product details such as versioning information.
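Hidden watermarking tools vary widely by use case; as a minimal sketch of the underlying idea, the example below hides a JSON provenance payload (authorship and product versioning, per the points above) in the least significant bits of a pixel buffer. LSB embedding is illustrative only: it is fragile to re-encoding and trivially removable, so real deployments would use an existing robust tool as the criterion advises. The agency and model names are invented.

```python
import json

def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least significant bit of each carrier byte.
    A 32-bit length header is embedded first so extraction knows where to stop."""
    header = len(payload).to_bytes(4, "big")
    bits = "".join(f"{b:08b}" for b in header + payload)
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(bit)
    return out

def extract_lsb(pixels: bytes) -> bytes:
    """Read the length header, then recover that many payload bytes."""
    bits = "".join(str(b & 1) for b in pixels)
    length = int(bits[:32], 2)
    data_bits = bits[32:32 + 8 * length]
    return bytes(int(data_bits[i:i + 8], 2) for i in range(0, len(data_bits), 8))

# Hypothetical provenance payload and a stand-in pixel buffer.
provenance = json.dumps(
    {"author": "Example Agency", "model": "demo-v1", "generated": True}).encode()
carrier = bytearray(range(256)) * 10
marked = embed_lsb(carrier, provenance)
```

Flipping only the lowest bit changes each byte's value by at most one, which is why a well-placed hidden watermark is imperceptible in the rendered image.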
- Criterion 26: Assess watermarking risks and limitations.
This includes:
- ensuring users understand that third parties can replicate a visual watermark, and that watermarks should not be over-relied on, for example when sourcing content externally
- preventing third parties from using watermarking algorithms to present their own content as coming from the original creator
- considering situations where watermarking is not beneficial, for example when it is visually distracting for decision makers or overused in low-risk applications
- considering situations where malicious actors might remove or replicate the watermark to reproduce AI-generated content
- managing copyright or trademark risks related to externally sourced data.
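For the replication and impersonation risks above, one common mitigation, sketched here rather than mandated by the standard, is to authenticate the watermark payload with a keyed MAC: a copied or forged payload then fails verification without the agency's signing key. The key and field names below are hypothetical; a real key would live in a key management service, never in source code.

```python
import hashlib
import hmac
import json

# Hypothetical agency-held signing key (illustration only).
SECRET_KEY = b"example-agency-watermark-key"

def sign_payload(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag over a canonical JSON encoding of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_payload(signed: dict) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    body = json.dumps(signed["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

signed = sign_payload({"author": "Example Agency", "model": "demo-v1"})
```

Verification addresses forgery of the payload itself, not removal of the watermark, so it complements rather than replaces the other controls listed above.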