AI watermarking embeds visual or hidden markers into generated content so that its origin and creation details can be identified. It gives content consumers transparency, authenticity, and trust.
Visual watermarks or disclosures provide a simple way for someone to know they are viewing content created by, or interacting with, an AI system. This applies both to generated media content and to interactive GenAI systems.
The Coalition for Content Provenance and Authenticity (C2PA) is developing an open technical standard for publishers, creators, and consumers to establish the origin and edits of digital content. Advice on the use of C2PA is out of scope for the standard.
Criterion 22: Apply visual watermarks and metadata to generated media content to provide transparency and provenance, including authorship.
This only applies where AI-generated content may directly impact a user. For instance, using AI to generate a team logo would not need to be watermarked.
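One way to pair a visual watermark with machine-readable provenance is a sidecar metadata record alongside the generated file. The sketch below is illustrative only: the field names are assumptions, not drawn from C2PA or any formal schema, and `example-image-model` and `Example Agency` are placeholder values.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, generator: str, author: str) -> dict:
    """Build a simple provenance record for a piece of generated content.

    Field names are illustrative, not taken from C2PA or any standard.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,          # the AI model or service used
        "author": author,                # accountable person or agency
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,            # explicit disclosure flag
    }

record = provenance_record(b"<image bytes>", "example-image-model", "Example Agency")
print(json.dumps(record, indent=2))
```

Hashing the content ties the record to one exact artefact, so any later edit to the file is detectable as a mismatch against the stored digest.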
Criterion 23: Apply watermarks and metadata that are WCAG-compatible, where relevant.
Criterion 24: Apply visual and accessible indicators to show when a user is interacting with an AI system.
For example, this may include adding text to a GenAI interface so that users are aware they are interacting with an AI system rather than a human.
Criterion 25: For hidden watermarks, use watermarking tools based on the use case and content risk.
This includes:
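To make the idea of a hidden watermark concrete, the toy sketch below embeds a short message in the least significant bit of each byte of cover data, a classic steganographic technique. Production watermarking tools are far more robust to compression and editing; this is only a minimal illustration of the principle.

```python
def embed_bits(pixels: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too large for cover data")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_bits(pixels: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden message from the cover-byte LSBs."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )

cover = bytearray(range(256)) * 2          # stand-in for image pixel data
stego = embed_bits(cover, b"AI")
assert extract_bits(stego, 2) == b"AI"     # marker survives, imperceptibly
```

Because only the lowest bit of each byte changes, the cover data is visually indistinguishable from the original, which is what makes the marker "hidden".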
Criterion 26: Assess watermarking risks and limitations.
This includes:
Criterion 27: Define the problem to be solved, its context, intended use, and impacted stakeholders.
This includes:
Criterion 28: Assess AI and non-AI alternatives.
This includes:
Criterion 29: Assess environmental impact and sustainability.
Developing and using AI systems may have corresponding trade-offs with electricity usage, water consumption, and carbon emissions.
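A first-order carbon estimate multiplies energy drawn by the grid's carbon intensity. The sketch below is a rough model only: the default intensity of 0.7 kg CO2e/kWh is a placeholder assumption, and real assessments should use published figures for the relevant grid plus embodied and water-use factors.

```python
def estimated_emissions_kg(power_kw: float, hours: float,
                           grid_intensity_kg_per_kwh: float = 0.7) -> float:
    """Rough CO2e estimate: energy used times grid carbon intensity.

    The default intensity is a placeholder; substitute the published
    figure for your region's electricity grid.
    """
    return power_kw * hours * grid_intensity_kg_per_kwh

# e.g. a 10 kW GPU node running for 48 hours
print(estimated_emissions_kg(10, 48))  # roughly 336 kg CO2e under the assumed intensity
```

Even this crude figure lets alternatives (smaller models, fewer training runs, greener regions) be compared on a common scale.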
Criterion 30: Perform cost analysis across all aspects of the AI system.
This includes:
Criterion 31: Analyse how the use of AI will impact the solution and its delivery.
This includes:
Criterion 32: Identify human values requirements.
Human values represent what people deem important in life, such as autonomy, simplicity, tradition, achievement, and social recognition.
This includes:
Criterion 33: Establish a mechanism to inform users of AI interactions and output, as part of transparency.
Depending on use case this may include:
Criterion 34: Design AI systems that are inclusive, ethical, and meet accessibility standards, using appropriate mechanisms.
This includes:
Criterion 35: Define feedback mechanisms.
This includes:
Criterion 36: Define human oversight and control mechanisms.
This includes:
Criterion 37: Involve users in the design process.
The intention is to promote better outcomes for managing inclusion and accessibility by setting expectations at the beginning of the AI system lifecycle.
This includes:
Criterion 38: Analyse and assess harms.
This includes:
Criterion 39: Mitigate harms by embedding mechanisms for prevention, detection, and intervention.
This includes:
Criterion 40: Design the system to allow calibration at deployment.
This includes:
Criterion 41: Identify, assess, and select metrics appropriate to the AI system.
Relying on a single metric can create false confidence, while tracking irrelevant metrics can trigger false alerts. To mitigate these risks, analyse the capabilities and limitations of each metric, select multiple complementary metrics, and implement methods to test assumptions and find missing information.
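The single-metric risk above can be shown with a small worked example: on imbalanced data, accuracy alone looks healthy while a complementary metric (recall) exposes the failure. The numbers below are hypothetical.

```python
def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

# A classifier on imbalanced data: 990 true negatives, but only 1 of the
# 10 actual positives is caught. Accuracy looks fine; recall does not.
tp, fp, tn, fn = 1, 0, 990, 9
print(accuracy(tp, fp, tn, fn))  # 0.991 -- misleadingly high
print(recall(tp, fn))            # 0.1   -- the model misses 90% of positives
```

Pairing metrics that fail in different ways (here accuracy with recall and precision) is what makes a metric set complementary rather than redundant.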
Considerations for metrics include:
After metrics have been identified, understand and assess the trade-offs between the metrics.
This includes:
Criterion 42: Reevaluate the selection of appropriate success metrics as the AI system moves through the AI lifecycle.
Criterion 43: Continuously verify correctness of the metrics.
Before relying on the metrics, verify the following:
Criterion 44: Create and collect data for the AI system and identify the purpose for its use.
It is important to identify:
Criterion 45: Plan for data archival and destruction.
Consider the following:
Criterion 46: Analyse data for use by mapping the data supply chain and ensuring traceability.
Mapping the data supply chain to the AI system involves capturing how data will be stored, shared, and processed, particularly at the training and testing stages, which involve regular injections of data. When mapping the data, account for:
Ensuring traceability entails maintaining awareness of the flow of data across the AI system.
This includes:
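Traceability of this kind can be sketched as an append-only lineage log that hashes each dataset as it moves between stages. This is a minimal illustration under assumed stage names ("ingest", "training"); real pipelines would persist the log and capture richer context such as source system and consent scope.

```python
import hashlib
from datetime import datetime, timezone

class DataLineage:
    """Append-only log recording each dataset as it flows through the system."""

    def __init__(self):
        self.entries = []

    def record(self, stage: str, payload: bytes) -> str:
        """Log that this exact payload was seen at this stage; return its hash."""
        digest = hashlib.sha256(payload).hexdigest()
        self.entries.append({
            "stage": stage,
            "sha256": digest,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return digest

    def trace(self, digest: str) -> list:
        """Return every stage at which this exact payload was seen."""
        return [e["stage"] for e in self.entries if e["sha256"] == digest]

lineage = DataLineage()
raw = b"id,label\n1,cat\n"
h = lineage.record("ingest", raw)
lineage.record("training", raw)
print(lineage.trace(h))  # ['ingest', 'training']
```

Content hashing means traceability is tied to the data itself rather than to file names, so silent substitutions between stages become detectable.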
Criterion 47: Implement practices to maintain and reuse data.
This involves determining ongoing mechanisms for ensuring data is protected, accessible, and available for use in line with the original consent parameters.
Any changes in data scope, including expansion of scope and usage patterns, must be monitored and addressed.
Criterion 48: Implement processes to enable data access and retrieval, encompassing the sharing, archiving, and deletion of data.
Considerations include:
Criterion 49: Establish standard operating procedures for data orchestration.
This includes:
Practices to be defined include:
Criterion 50: Configure integration processes so that data is integrated in increments.
This includes:
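Incremental integration is commonly implemented with a checkpoint (a high-water mark): each run processes only records newer than the last checkpoint. A minimal sketch, assuming records carry a monotonically increasing timestamp:

```python
def incremental_load(records, last_checkpoint):
    """Integrate only records newer than the last processed timestamp.

    `records` are (timestamp, payload) pairs; the returned checkpoint is
    fed back in on the next run so each increment is processed once.
    """
    new = [r for r in records if r[0] > last_checkpoint]
    checkpoint = max((r[0] for r in new), default=last_checkpoint)
    return new, checkpoint

batch = [(1, "a"), (2, "b"), (3, "c")]
new, cp = incremental_load(batch, last_checkpoint=1)
print(new, cp)  # [(2, 'b'), (3, 'c')] 3
```

Returning the unchanged checkpoint when no new records arrive makes the run idempotent, so a re-run after a failure does not duplicate data.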
Criterion 51: Implement automation processes to orchestrate the reliable flow of data between systems and platforms.
Criterion 52: Perform oversight and regular testing of task dependencies.
This should involve having comprehensive backup plans in place to handle potential outages or incidents.
The following should be considered:
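One concrete way to test task dependencies is to validate the task graph itself: compute a run order and fail fast on circular dependencies before they cause an outage. A minimal sketch with hypothetical task names ("ingest", "clean", "train"):

```python
def run_order(dependencies: dict) -> list:
    """Topologically sort tasks so each runs after its dependencies.

    Raises ValueError on a circular dependency, which regular testing
    of the task graph should surface before deployment.
    """
    order, seen, visiting = [], set(), set()

    def visit(task):
        if task in seen:
            return
        if task in visiting:
            raise ValueError(f"circular dependency involving {task!r}")
        visiting.add(task)
        for dep in dependencies.get(task, []):
            visit(dep)
        visiting.discard(task)
        seen.add(task)
        order.append(task)

    for task in dependencies:
        visit(task)
    return order

deps = {"train": ["clean"], "clean": ["ingest"], "ingest": []}
print(run_order(deps))  # ['ingest', 'clean', 'train']
```

Running this check in CI whenever the task graph changes catches broken dependency edits early, complementing the backup plans described above.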
Criterion 53: Establish and maintain data exchange processes.
This includes: