The principles and requirements in this section are designed to enable a forward-leaning approach to agency AI adoption. They establish AI accountability at both the agency and use case levels and seek to build trust through transparency.
Agencies may refer to further explanatory material on the OECD website.
Given the rapidly changing nature of AI, agencies should stay abreast of any changes to this definition. The definition in this policy will be reviewed as the broader, whole-of-economy regulatory environment matures to ensure an aligned approach.
This policy uses the term 'AI use case' in some of its requirements. AI systems can have one or more applications, which can each differ in their intended purpose, functionality and risk level. The policy focuses on conducting assessments and applying actions at the AI use case level. Agencies are to use the following definition of AI use case in applying this policy:
Some general-purpose AI solutions, such as Microsoft Copilot, may include a variety of use cases. When applying the policy to these solutions, agencies can choose one of the following approaches:
This policy requires agencies to provide a way to manage AI incidents as part of operationalising the responsible use of AI. The definition of an AI incident is still being considered globally. As this work evolves, the policy will adapt to changes in the policy environment as necessary.
For the purposes of applying this policy, the following definition of an AI incident has been adapted from the OECD's draft definition to better suit the Australian government context:
In addition to the definition provided above, agencies may choose to designate additional circumstances that constitute an AI incident in their operating context.
While this section lists frameworks that are related to AI, it is not exhaustive. Agencies should consider which existing frameworks apply to them and their specific AI use cases.
Artificial Intelligence (AI) model clauses
Australia's AI Ethics Principles
Engaging with Artificial Intelligence (AI) guidance
Guidance on privacy and the use of commercially available AI products
Information management for records created using Artificial Intelligence (AI) technologies
Public generative AI tools: managing access
Public generative AI tools: using safely and responsibly
Technical standard for government's use of artificial intelligence
Automated Decision-making Better Practice Guide
At a minimum, an AI use case is in scope of this policy if any of the following apply:
Agencies may wish to apply this policy to AI use cases that do not meet the above criteria. This includes use cases with specific characteristics or factors unique to an agency's operating environment that may benefit from applying an impact assessment and governance actions.
This policy has been designed to exclude incidental and lower risk uses of AI that do not meet the criteria. Incidental uses of AI may include off-the-shelf software with AI features such as grammar checks and internet searches with AI functionality. The policy recognises that incidental usage of AI will grow over time and focuses on uses that require additional oversight and governance.
In assessing whether a use case is in scope, agencies should also carefully consider AI use in the following areas:
While use cases in these areas are not automatically high-risk, they are more likely to involve risks that require careful attention through an impact assessment.
For the avoidance of doubt, agencies are not required to apply this policy if they are doing early-stage experimentation which does not:
If, while experimenting in this phase, there is a likelihood of proceeding with the AI use case, agencies should apply the policy. Agencies should also apply the Australian Government AI technical standard, which provides relevant information for developing use cases at each stage of the AI lifecycle.
The impact assessment tool is for Australian Government teams working on an artificial intelligence (AI) use case. It helps teams identify, assess and manage AI use case impacts and risks against Australia's AI Ethics Principles. Understanding and managing AI use case impacts and risks is critical for effective AI governance and for fulfilling the Australian Government's commitment to the safe and responsible use of AI. The impact assessment tool supports the Policy for the responsible use of AI in government.
The Digital Transformation Agency (DTA) provides the AI impact assessment tool and supporting guidance to assist Australian Government agencies to assess their proposed use of artificial intelligence (AI). Agencies should not treat the tool or guidance as legal advice or as authorising proposed AI use. Agencies are responsible for any decisions relating to their use of AI and for seeking technical and legal advice as appropriate.
Considers whether a decision made was the correct or preferable one in the circumstances, and may include internal review conducted by the agency or external review by the Administrative Review Tribunal.
Where an action can be challenged via internal review (as permitted by relevant legislation), you should consider what processes are in place to allow for internal review of an action materially influenced by AI, for example, by another or more senior officer in the agency.