Statement 11: Design safety systemically
Agencies must:
Criterion 38: Analyse and assess harms.
This includes:
- utilising functional safety standards that provide frameworks for a systematic and robust harms analysis.
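As an illustration only, a systematic harms analysis of the kind functional safety standards call for is often captured in a hazard register, where each identified harm is scored and ranked so mitigation effort can be prioritised. The sketch below is a hypothetical, minimal register; the severity scale, likelihood scale, and example entries are illustrative assumptions, not part of this standard.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    NEGLIGIBLE = 1
    MARGINAL = 2
    CRITICAL = 3
    CATASTROPHIC = 4

@dataclass
class HazardEntry:
    """One row of a hazard register: a harm, its assessed risk, its mitigation."""
    harm: str
    cause: str
    severity: Severity
    likelihood: int                  # illustrative scale: 1 (rare) to 5 (almost certain)
    mitigation: str = "unmitigated"

    def risk_score(self) -> int:
        # Simple severity x likelihood scoring, in the spirit of the risk
        # matrices used by many functional safety frameworks.
        return self.severity.value * self.likelihood

register = [
    HazardEntry("harmful output shown to user", "unfiltered model response",
                Severity.CRITICAL, 3, "output filtering and human review"),
    HazardEntry("sensitive data disclosure", "model memorisation of training data",
                Severity.CATASTROPHIC, 2, "sensitive-information scanning on outputs"),
]

# Highest-risk harms first, so mitigation is prioritised systematically.
for entry in sorted(register, key=HazardEntry.risk_score, reverse=True):
    print(f"{entry.risk_score():>2}  {entry.harm}  ->  {entry.mitigation}")
```

The register is deliberately simple: the point is that harms are enumerated, assessed on a consistent scale, and ranked, rather than handled ad hoc.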
Criterion 36: Mitigate harms by embedding mechanisms for prevention, detection, and intervention.
This includes:
- designing the system to avoid the sources of harm
- designing the system to detect the sources of harm
- designing the system to check and filter its inputs and outputs for harm
- designing the system to check for sensitive information disclosure
- designing the system to monitor faults in its operation
- designing the system with redundancy
- designing intervention mechanisms such as warnings to users and operators, automatic recovery to a safe state, transfer of control, and manual override
- designing the system to log the harms and faults it detects
- designing the system to disengage safely as per requirements
- for physical systems, designing proper protective equipment and procedures for safe handling
- ensuring the system meets privacy and security requirements and adheres to the need-to-know principle for information security.
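Several of the mechanisms above (checking inputs and outputs for harm, checking for sensitive information disclosure, intervening, and logging detected harms) can be composed around a model call. The sketch below is a hypothetical, minimal guard layer; the marker lists stand in for real harm and sensitive-information classifiers, and `guarded_respond` and its messages are illustrative names, not prescribed by this standard.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-safety")

# Hypothetical marker lists standing in for real classifiers.
HARM_MARKERS = {"exploit", "credit card number"}
SENSITIVE_MARKERS = {"tfn:", "password:"}

def check_text(text: str) -> list[str]:
    """Detection: return the list of problems found in a piece of text."""
    lowered = text.lower()
    problems = [m for m in HARM_MARKERS if m in lowered]
    problems += [m for m in SENSITIVE_MARKERS if m in lowered]
    return problems

def guarded_respond(prompt: str, model) -> str:
    """Prevention, detection, intervention, and logging around one model call."""
    # Prevention: check and filter inputs before they reach the model.
    issues = check_text(prompt)
    if issues:
        log.warning("blocked input: %s", issues)   # log the harms detected
        return "Request refused: input failed safety checks."
    output = model(prompt)
    # Detection: check outputs for harm and sensitive information disclosure.
    issues = check_text(output)
    if issues:
        log.warning("blocked output: %s", issues)
        # Intervention: fall back to a safe response instead of emitting output.
        return "Response withheld: output failed safety checks."
    return output

print(guarded_respond("hello", lambda p: "hi there"))
print(guarded_respond("hello", lambda p: "password: hunter2"))
```

In a production system each check would be a dedicated service (a harm classifier, a sensitive-data scanner), but the shape is the same: prevent what you can, detect what you cannot, intervene safely, and log everything you detect.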
Agencies should:
Criterion 40: Design the system to allow calibration at deployment.
This includes:
- exposing initial setup parameters for adjustment where they are critical to the performance, reliability, and safety of the AI system.
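One way to support calibration at deployment is to collect the critical setup parameters into a single validated configuration object, so each deployment site supplies its own values and a bad calibration fails at load time rather than in service. The sketch below is a hypothetical example; the parameter names (`decision_threshold`, `max_output_tokens`, `human_review_above`) and their defaults are illustrative assumptions, not values prescribed by this standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentCalibration:
    """Hypothetical setup parameters exposed for adjustment at deployment."""
    decision_threshold: float = 0.5   # classification cut-off
    max_output_tokens: int = 512      # hard cap on response length
    human_review_above: float = 0.8   # route high-risk scores to a person

    def __post_init__(self):
        # Validate at load time so an unsafe calibration is rejected
        # before the system serves any requests.
        if not 0.0 <= self.decision_threshold <= 1.0:
            raise ValueError("decision_threshold must be in [0, 1]")
        if self.max_output_tokens <= 0:
            raise ValueError("max_output_tokens must be positive")
        if not 0.0 <= self.human_review_above <= 1.0:
            raise ValueError("human_review_above must be in [0, 1]")

# Each deployment supplies its own calibration instead of hard-coded values.
site_config = DeploymentCalibration(decision_threshold=0.65)
```

Freezing the dataclass means recalibration happens by loading a new configuration, which keeps each deployed calibration auditable.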