Statement 5: Provide explainability based on the use case

Agencies must:

  • Criterion 12: Explain the AI system and technology used, including the limitations and capabilities of the system.

    AI algorithms and technologies, such as deep learning models, are often seen as 'black boxes'. This can make it difficult to understand how they work and which factors generate their outcomes. Providing clear and understandable explanations of AI outputs helps maintain trust in, and transparency of, AI systems.

    Explainability tailored to the specific context of the use case ensures a clear understanding of the reasoning behind AI system outputs. This supports accountability, trust, and ethical considerations.

    This may include:

    • explaining the AI system, such as:
      • consideration of trade-offs such as cost and performance
      • what changes are made with AI system updates
      • how feedback is used to improve AI system performance
      • whether the AI system is static or learns from user behaviour
      • whether AI techniques would provide clearer explanations and validate AI actions and decisions
    • identifying use cases that are impacted by legislation, regulation, rules, or third-party involvement
    • explaining how the system operates, including situations that require human intervention
    • explaining the technical and governance mechanisms that ensure ethical outcomes from the use of an AI system
    • informing stakeholders when changes are made to the system
    • providing persona-level explainability that adheres to need-to-know principles.
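    As one illustrative sketch of how an agency might surface the factors behind a 'black box' model's outcomes, a global feature-importance summary can be generated from a trained model. This is a minimal example only: the model type, dataset, and feature names below are hypothetical placeholders, not a prescribed method.

    ```python
    # Illustrative only: summarise which input features most influence a
    # trained model's decisions. Data and feature names are hypothetical.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    feature_names = ["age", "income", "tenure", "usage", "region_code"]

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Rank features by their measured contribution to the model's decisions,
    # giving a starting point for a plain-language explanation of the system.
    ranked = sorted(zip(feature_names, model.feature_importances_),
                    key=lambda pair: pair[1], reverse=True)
    for name, importance in ranked:
        print(f"{name}: {importance:.3f}")
    ```

    A summary like this supports explaining trade-offs and validating system behaviour, but it is not a substitute for context-specific explanations of individual outputs.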

Agencies should:

  • Criterion 13: Explain outputs made by the AI system to end users.

    This typically includes:

    • explaining:
      • AI outputs that have serious consequences
      • how outputs are based on the data used
      • consequences of system actions and user interactions
      • errors
      • high-risk situations
    • avoiding explanations that are confusing or misleading
    • using a variety of methods to explain outputs.
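    One minimal sketch of presenting an output explanation to an end user is below. The decision text, factor names, and scores are hypothetical, and a real system would draw them from its actual attribution method; the point is limiting the explanation to the strongest factors so it stays clear rather than confusing.

    ```python
    # Illustrative only: turn a model decision and its top contributing
    # factors into a short, plain-language explanation for an end user.
    def explain_decision(decision: str, factors: list[tuple[str, float]],
                         top_n: int = 2) -> str:
        # Keep only the strongest factors so the explanation stays clear,
        # avoiding long lists that could confuse or mislead the user.
        top = sorted(factors, key=lambda f: abs(f[1]), reverse=True)[:top_n]
        reasons = " and ".join(name for name, _ in top)
        return f"The system's output was '{decision}', mainly because of {reasons}."

    message = explain_decision(
        "application flagged for human review",
        [("incomplete address history", 0.41),
         ("recent change of bank details", 0.35),
         ("application lodged after hours", 0.05)],
    )
    print(message)
    ```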
  • Criterion 14: Explain how data is used and shared by the AI system.

    This includes: 

    • how personal and organisational data is used and shared between the AI system and other applications
    • who can access the data
    • where identified data has been used, or will be used, for AI system training.
       

Statement 6: Manage system bias
