Statement 11: Design safety systemically

Agencies must:

  • Criterion 38: Analyse and assess harms.

    This includes:

    • utilising functional safety standards that provide frameworks for a systematic and robust harms analysis (a generic harms-register sketch appears after this list).
  • Criterion 36: Mitigate harms by embedding mechanisms for prevention, detection, and intervention.

    This includes (several of these mechanisms are combined in the guarded-response sketch after this list):

    • designing the system to avoid the sources of harm
    • designing the system to detect the sources of harm
    • designing the system to check and filter its inputs and outputs for harm
    • designing the system to check for sensitive information disclosure
    • designing the system to monitor faults in its operation
    • designing the system with redundancy
    • designing intervention mechanisms such as warnings to users and operators, automatic recovery to a safe state, transfer of control, and manual override
    • designing the system to log the harms and faults it detects
    • designing the system to disengage safely as per requirements
    • for physical systems, designing proper protective equipment and procedures for safe handling
    • ensuring the system meets privacy and security requirements and adheres to the need-to-know principle for information security.
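
A generic illustration of the harms analysis behind Criterion 38. The rating scales and record structure below are hypothetical and not drawn from any particular functional safety standard; many such frameworks rate each identified harm by severity and likelihood and derive a risk level from the two.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MARGINAL = 2
    CRITICAL = 3
    CATASTROPHIC = 4

class Likelihood(IntEnum):
    RARE = 1
    UNLIKELY = 2
    POSSIBLE = 3
    LIKELY = 4

@dataclass
class HarmRecord:
    """One harms-register entry: what can go wrong, how bad, how often."""
    description: str
    severity: Severity
    likelihood: Likelihood

    @property
    def risk_score(self) -> int:
        # Simple severity x likelihood product; real standards define their
        # own rating schemes and acceptance thresholds.
        return int(self.severity) * int(self.likelihood)

# Hypothetical entries, reviewed highest risk first.
register = [
    HarmRecord("Chatbot discloses personal information", Severity.CRITICAL, Likelihood.POSSIBLE),
    HarmRecord("Model output misleads a caseworker", Severity.MARGINAL, Likelihood.LIKELY),
]
for harm in sorted(register, key=lambda h: h.risk_score, reverse=True):
    print(f"{harm.risk_score:2d}  {harm.description}")
```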
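Several of the Criterion 36 mechanisms (checking inputs and outputs for harm, checking for sensitive information disclosure, monitoring faults, logging detections, and recovering to a safe state) can sit on one guarded inference path. The sketch below is a minimal illustration under stated assumptions: the deny-list check, the TFN-like pattern, the `model` callable, and the fall-back message are hypothetical stand-ins for purpose-built classifiers and PII detectors.

```python
import logging
import re

logger = logging.getLogger("ai_safety")

# Assumption: a TFN-like 9-digit pattern stands in for real PII detection.
SENSITIVE_PATTERN = re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b")
# Assumption: a tiny deny list stands in for a real harm classifier.
BLOCKED_TERMS = ("make a weapon",)
SAFE_FALLBACK = "I can't help with that request. It has been referred to an operator."

def contains_harm(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKED_TERMS)

def discloses_sensitive_info(text: str) -> bool:
    return bool(SENSITIVE_PATTERN.search(text))

def guarded_respond(model, user_input: str) -> str:
    """Check inputs and outputs for harm, log detections, fall back safely."""
    if contains_harm(user_input):
        logger.warning("Blocked harmful input: %r", user_input)  # log the detected harm
        return SAFE_FALLBACK                                     # recover to a safe state
    try:
        output = model(user_input)
    except Exception:
        logger.exception("Model fault; disengaging safely")      # monitor faults in operation
        return SAFE_FALLBACK
    if contains_harm(output) or discloses_sensitive_info(output):
        logger.warning("Blocked harmful or sensitive output")    # filter outputs for harm
        return SAFE_FALLBACK
    return output
```

Keeping every check on one path makes detections easy to log consistently and makes the safe state the default outcome on any failure.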

Agencies should:

  • Criterion 40: Design the system to allow calibration at deployment.

    This includes:

    • calibrating initial setup parameters where these are critical to the performance, reliability, and safety of the AI system (see the sketch below).
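
One way to support calibration at deployment is to keep such parameters out of the code and load and validate them at start-up. The sketch below is hypothetical: the parameter names (`confidence_threshold`, `max_batch_size`), their bounds, and the JSON file layout are illustrative assumptions, not requirements of the standard.

```python
import json
from pathlib import Path

# Illustrative bounds per calibratable parameter: (min, max, default).
PARAMETER_BOUNDS = {
    "confidence_threshold": (0.5, 0.99, 0.9),
    "max_batch_size": (1, 256, 32),
}

def load_calibration(path: Path) -> dict:
    """Load deployment-time parameters, rejecting out-of-range values."""
    supplied = json.loads(path.read_text()) if path.exists() else {}
    calibrated = {}
    for name, (low, high, default) in PARAMETER_BOUNDS.items():
        value = supplied.get(name, default)
        if not low <= value <= high:
            raise ValueError(f"{name}={value} outside safe range [{low}, {high}]")
        calibrated[name] = value
    return calibrated

# Usage: params = load_calibration(Path("deployment_calibration.json"))
```

Rejecting out-of-range values at start-up stops a mis-calibrated deployment from running at all, rather than letting it fail later in operation.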
       

Statement 12: Define success criteria
