Agencies must:
Criterion 32: Identify human values requirements.
Human values represent what people deem important in life, such as autonomy, simplicity, tradition, achievement, and social recognition.
This includes:
- using traditional requirements elicitation techniques such as surveys, interviews, group discussions, and workshops to capture relevant human values for the AI use case
- translating human values into technical requirements, which may vary depending on the risk level and AI use case (a minimal sketch follows this list)
- reviewing feedback to identify human values that have been overlooked in the AI system
- understanding the hierarchy of human values and emphasising those with higher relevance
- considering social, economic, political, ethical, and legal values when designing AI systems
- considering human values that are domain specific and based on the context of the AI system.
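To make the translation step concrete, the following is a minimal Python sketch of how elicited values might be recorded against verifiable technical requirements. The class names, fields, and requirement identifier are illustrative assumptions, not terms mandated by this standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class HumanValue:
    """A human value elicited for the use case, with its place in the value hierarchy."""
    name: str       # e.g. "autonomy"
    priority: int   # lower number = higher relevance in the hierarchy
    context: str    # domain-specific context in which the value applies


@dataclass
class TechnicalRequirement:
    """A verifiable requirement derived from one or more human values."""
    identifier: str
    description: str
    risk_level: RiskLevel
    derived_from: list[HumanValue] = field(default_factory=list)


# Example: translating "autonomy" into a testable requirement.
autonomy = HumanValue(name="autonomy", priority=1, context="citizen-facing services")
requirement = TechnicalRequirement(
    identifier="REQ-HV-01",   # hypothetical identifier scheme
    description="Users can opt out of AI-generated recommendations at any time.",
    risk_level=RiskLevel.HIGH,
    derived_from=[autonomy],
)
```

Keeping the derived_from link explicit supports the review step above: feedback that surfaces an overlooked value can be traced to the requirements, or gaps, it affects.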
Criterion 33: Establish a mechanism to inform users of AI interactions and output, as part of transparency.
Depending on the use case, this may include:
- incorporating visual cues on the AI product when applicable
- informing users when text, audio, or visual messages addressed to them are generated by AI
- including visual watermarks to identify content generated by AI
- providing transparency on whether a user is interacting with a person or a system
- including a disclaimer on the limitations of the system (see the sketch after this list)
- displaying the relevance and currency of the information being provided
- providing persona-level transparency that adheres to need-to-know principles
- providing alternate channels where a user chooses not to use the AI system. This may include channels such as a non-AI digital interface, telephony, or paper.
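As a minimal sketch of such a mechanism, assuming a text-based service, the Python example below attaches transparency metadata to every AI-generated response so that the AI notice, the disclaimer, and the currency of the information are always displayed together. All names, wording, and dates are illustrative.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIDisclosure:
    """Transparency metadata attached to every AI-generated response."""
    generated_by_ai: bool
    disclaimer: str        # limitations of the system
    current_as_of: date    # currency of the underlying information


def present_response(content: str, disclosure: AIDisclosure) -> str:
    """Render AI output together with its disclosure so users always see both."""
    notices = []
    if disclosure.generated_by_ai:
        notices.append("This response was generated by an AI system.")
    notices.append(disclosure.disclaimer)
    notices.append(f"Information current as of {disclosure.current_as_of:%d %B %Y}.")
    return content + "\n\n" + "\n".join(notices)


disclosure = AIDisclosure(
    generated_by_ai=True,
    disclaimer=("This system can make mistakes. A non-AI channel is available "
                "if you prefer not to use it."),
    current_as_of=date(2024, 6, 30),
)
print(present_response("Your application has been received.", disclosure))
```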
Criterion 34: Design AI systems that are inclusive, ethical, and meet accessibility standards, using appropriate mechanisms.
This includes:
- identifying affirmative actions or preferential treatment that apply to any person or to specific stakeholder groups
- ensuring diversity and inclusion requirements and guidelines are met throughout the entire AI lifecycle
- providing justification for situations such as pro-social policy outcomes
- reviewing and revisiting ethical considerations throughout the AI system lifecycle.
Criterion 35: Define feedback mechanisms.
This includes:
- providing options to users on the type of feedback method they prefer
- providing users with the choice to dismiss feedback
- providing users with the option to opt out of the AI system
- ensuring measures to protect personal information and user privacy
- capturing implicit feedback that reflects users' preferences and interactions, such as accepting or rejecting recommendations, usage time, or login frequency
- capturing explicit feedback via surveys, comments, ratings, or written feedback (both kinds of capture are sketched after this list).
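The sketch below shows one way both kinds of feedback could be captured through a common event record; the field names and signal labels are assumptions for illustration. Storing a pseudonymous identifier rather than personal details is one way to keep the capture consistent with the privacy measures above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class FeedbackEvent:
    """A single piece of user feedback, explicit or implicit."""
    subject_id: str                 # pseudonymous identifier, not personal information
    timestamp: datetime
    kind: str                       # "explicit" or "implicit"
    signal: str                     # e.g. "rating", "recommendation_rejected"
    detail: Optional[str] = None    # rating score, comment text, etc.


def record_explicit(subject_id: str, rating: int, comment: str = "") -> FeedbackEvent:
    """Capture feedback the user deliberately provides (survey, rating, comment)."""
    return FeedbackEvent(
        subject_id=subject_id,
        timestamp=datetime.now(timezone.utc),
        kind="explicit",
        signal="rating",
        detail=f"{rating}/5 {comment}".strip(),
    )


def record_implicit(subject_id: str, signal: str) -> FeedbackEvent:
    """Capture behavioural signals such as accepting or rejecting a recommendation."""
    return FeedbackEvent(
        subject_id=subject_id,
        timestamp=datetime.now(timezone.utc),
        kind="implicit",
        signal=signal,   # e.g. "recommendation_accepted", "session_length"
    )
```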
Criterion 36: Define human oversight and control mechanisms.
This includes:
- identifying conditions and situations that need to be supervised and monitored by a human, conditions that the system needs to escalate to a supervisor or operator for further review and approval, and conditions that should trigger transfer of control from the AI system to a supervisor or operator (a minimal sketch follows this list)
- defining the system states, errors, and other relevant information that should be observable and comprehensible to an informed human
- defining the pathway for the timely intervention, decision override, or auditable system takeover by authorised internal users
- recording subsets of inputs and outputs that may result in harm, to support monitoring, auditing, contesting, or validation. This facilitates reviewing false positives against the inputs that triggered them, and false negatives that resulted in harm
- identifying situations where a supervising human might become disengaged and designing the system to attract the operator's attention
- mapping human oversight and control requirements to the risks they mitigate
- identifying required personas and defining their roles
- adhering to privacy and security need-to-know principles.
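A minimal sketch of such an escalation pathway follows, assuming the system exposes a confidence score and a separate harm screen; the threshold, names, and in-memory audit trail are illustrative only.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    PROCEED = "proceed"     # AI system continues autonomously
    ESCALATE = "escalate"   # route to a supervisor or operator for review and approval
    TRANSFER = "transfer"   # transfer control from the AI system to a human


@dataclass
class Decision:
    """One AI output together with the signals the oversight rules depend on."""
    input_id: str
    confidence: float       # model confidence in this output
    potential_harm: bool    # flagged by a separate harm screen (assumed to exist)


# Decisions recorded for monitoring, auditing, contesting, and validation.
audit_trail: list[Decision] = []


def oversight_action(decision: Decision, confidence_threshold: float = 0.85) -> Action:
    """Map a decision to the oversight pathway its risk profile requires."""
    if decision.potential_harm:
        # Potentially harmful inputs and outputs are recorded before control
        # is transferred, so false positives and false negatives can later be
        # reviewed against the inputs that triggered them.
        audit_trail.append(decision)
        return Action.TRANSFER
    if decision.confidence < confidence_threshold:
        return Action.ESCALATE
    return Action.PROCEED
```

In a production system, the audit trail would be an append-only store with access restricted on a need-to-know basis, consistent with the final item above.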
Agencies should:
Criterion 37: Involve users in the design process.
The intention is to promote better outcomes for managing inclusion and accessibility by setting expectations at the beginning of the AI system lifecycle.
This includes:
- considering security guidance and the need-to-know principle
- involving users in defining requirements, and in evaluating and trialling systems or products.