• "an event, circumstance or series of events where the development, use or malfunction of one or more AI systems by, or under the direction of, an Australian Government agency directly or indirectly leads to any of the following:

    1. injury or harm to the health of a person or groups of people;
    2. disruption of the management and operation of critical infrastructure;
    3. violations of human rights or harms arising from a breach of obligations under applicable laws, including intellectual property, privacy and Indigenous cultural and intellectual property;
    4. harm to property, communities or the environment."
  • In addition to the definition provided above, agencies may choose to designate additional circumstances that constitute an AI incident in their operating context.

  • Appendix C: In-scope AI use cases

    Criteria and areas of consideration

    At a minimum, an AI use case is in scope of this policy if any of the following apply:

    • The use, misuse or failure of AI could lead to more than insignificant harm to individuals, communities, organisations, the environment or the collective rights of cultural groups including First Nations peoples.
    • The use of AI will materially influence administrative decisions that affect individuals, communities, organisations, the environment or the collective rights of cultural groups including First Nations peoples.
    • It is possible the public will directly interact with, or be significantly impacted by, the AI or its outputs without human review.
    • The AI is designed to use personal or sensitive data[1] or security classified information[2].
    • It is deemed an elevated risk AI use case as directed by the DTA.

    Agencies may wish to apply this policy to AI use cases that do not meet the above criteria. This includes use cases with specific characteristics or factors unique to an agency's operating environment that may benefit from applying an impact assessment and governance actions.

    This policy has been designed to exclude incidental and lower risk uses of AI that do not meet the criteria. Incidental uses of AI may include off-the-shelf software with AI features such as grammar checks and internet searches with AI functionality. The policy recognises that incidental usage of AI will grow over time and focuses on uses that require additional oversight and governance.

    In assessing whether a use case is in scope, agencies should also carefully consider AI use in the following areas:

    • recruitment and other employment-related decision making
    • automated decision making of discretionary decisions
    • administration of justice and democratic processes
    • law enforcement, profiling individuals, and border control
    • health
    • education
    • critical infrastructure.

    While use cases in these areas are not automatically high-risk, they are more likely to involve risks that require careful attention through an impact assessment.

    Experimentation

    For the avoidance of doubt, agencies are not required to apply this policy if they are doing early-stage experimentation which does not:

    • commit to proceeding with a use case or to any design decisions that would affect implementation later
    • risk harming anyone
    • introduce or exacerbate any privacy or security risks.

If, during this experimentation phase, it is likely that the AI use case will proceed, agencies should apply the policy. Agencies should also apply the Australian Government AI technical standard, which provides relevant information for developing use cases at each stage of the AI lifecycle.

  • Footnotes

    [1] As defined by the Privacy Act 1988 (Cth).

[2] As defined by the Australian Government Protective Security Policy Framework.

  • About the AI impact assessment tool

The impact assessment tool is for Australian Government teams working on an artificial intelligence (AI) use case. It helps teams identify, assess and manage AI use case impacts and risks against Australia's AI Ethics Principles. Understanding and managing AI use case impacts and risks is critical for effective AI governance and for fulfilling the Australian Government's commitment to safe and responsible use of AI. The impact assessment tool supports the Policy for the responsible use of AI in government.

  • Disclaimer

The Digital Transformation Agency (DTA) provides the AI impact assessment tool and supporting guidance to assist Australian Government agencies to assess their proposed use of artificial intelligence (AI). Agencies should not treat the tool or guidance as legal advice or as authorising proposed AI use. Agencies are responsible for any decisions relating to their use of AI and for seeking technical and legal advice as appropriate.


  • Merits review

Considers whether a decision was the correct or preferable one in the circumstances, and may include internal review conducted by the agency or external review by the Administrative Review Tribunal.

    Where an action can be challenged via internal review (as permitted by relevant legislation), you should consider what processes are in place to allow for internal review of an action materially influenced by AI, for example, by another or more senior officer in the agency.

  • Judicial review

    Examines whether an action was lawful (for example, whether the decision maker had the power to make a decision or whether a legal error has occurred in making a decision), and is limited to actions which affect an individual's liberties, vested rights or legitimate expectations.

  • You should ensure review rights that ordinarily apply to human-made decisions or actions are not impacted or limited because an AI system has been used.

    Notifications discussed at section 9.1 should include information about available review mechanisms so that people can make informed decisions about disputing administrative actions.

Ensure a person within your agency is able to answer questions in a court or tribunal about an administrative action taken by an AI system if that matter is ultimately challenged. Review mechanisms also affect the obligation to provide reasons. For example, the Administrative Decisions (Judicial Review) Act 1977 gives applicants a right to request reasons for administrative decisions.


• Testing the impact assessment tool

The DTA piloted an earlier draft of the impact assessment tool with 21 volunteer agencies from September to November 2024. Pilot participants provided valuable feedback that has informed further updates to the tool.

This earlier draft of the tool was known as the ‘Pilot AI assurance framework’. Since then, the title has been updated to ‘AI impact assessment tool’ to better reflect its intended scope and purpose.
