Appendix B: Definitions

Artificial intelligence

While there are various definitions of what constitutes AI, for the purposes of this policy agencies are to apply the definition provided by the Organisation for Economic Co-operation and Development (OECD):

"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

Agencies may refer to further explanatory material on the OECD website.

Given the rapidly changing nature of AI, agencies should keep up to date on any updates or changes to this definition. The definition in this policy will be reviewed as the broader, whole-of-economy regulatory environment matures to ensure an aligned approach.

AI use case

This policy uses the term 'AI use case' in some of its requirements. AI systems can have one or more applications, which can each differ in their intended purpose, functionality and risk level. The policy focuses on conducting assessments and applying actions at the AI use case level. Agencies are to use the following definition of AI use case in applying this policy:

"An AI use case is a specific application of an AI system or systems to achieve certain objectives or perform certain tasks."

Some general-purpose AI solutions, such as Microsoft Copilot, may include a variety of use cases. When applying the policy to these solutions, agencies can choose one of the following approaches:

  • treat the AI solution as a single complex use case and undertake policy actions appropriate for the highest level of risk
  • treat each use case separately and undertake policy actions appropriate to each use case's respective risk, including appointing a separate accountable use case owner and registering each in-scope use case on the agency's internal register.

AI incident

This policy requires agencies to establish a way to manage AI incidents as part of operationalising the responsible use of AI. The definition of an AI incident is still being considered globally. As this definition evolves, the policy will adapt to changes in the policy environment as necessary.

For the purposes of applying this policy, the following definition of an AI incident has been adapted from the OECD's draft definition to better suit the Australian government context:

"an event, circumstance or series of events where the development, use or malfunction of one or more AI systems by, or under the direction of, an Australian Government agency directly or indirectly leads to any of the following:

  1. injury or harm to the health of a person or groups of people;
  2. disruption of the management and operation of critical infrastructure;
  3. violations of human rights or harms arising from a breach of obligations under applicable laws, including intellectual property, privacy and Indigenous cultural and intellectual property;
  4. harm to property, communities or the environment."

In addition to the definition provided above, agencies may choose to designate additional circumstances that constitute an AI incident in their operating context.

Appendix C: In-scope AI use cases

Criteria and areas of consideration

At a minimum, an AI use case is in scope of this policy if any of the following apply:

  • The use, misuse or failure of AI could lead to more than insignificant harm to individuals, communities, organisations, the environment or the collective rights of cultural groups including First Nations peoples.
  • The use of AI will materially influence administrative decisions that affect individuals, communities, organisations, the environment or the collective rights of cultural groups including First Nations peoples.
  • It is possible the public will directly interact with, or be significantly impacted by, the AI or its outputs without human review.
  • The AI is designed to use personal or sensitive data[1] or security classified information[2].
  • It is deemed an elevated risk AI use case as directed by the DTA.

Agencies may wish to apply this policy to AI use cases that do not meet the above criteria. This includes use cases with specific characteristics or factors unique to an agency's operating environment that may benefit from applying an impact assessment and governance actions.

This policy has been designed to exclude incidental and lower risk uses of AI that do not meet the criteria. Incidental uses of AI may include off-the-shelf software with AI features such as grammar checks and internet searches with AI functionality. The policy recognises that incidental usage of AI will grow over time and focuses on uses that require additional oversight and governance.

In assessing whether a use case is in scope, agencies should also carefully consider AI use in the following areas:

  • recruitment and other employment-related decision making
  • automated decision making involving discretionary decisions
  • administration of justice and democratic processes
  • law enforcement, profiling individuals, and border control
  • health
  • education
  • critical infrastructure.

While use cases in these areas are not automatically high-risk, they are more likely to involve risks that require careful attention through an impact assessment.

Experimentation

For the avoidance of doubt, agencies are not required to apply this policy to early-stage experimentation that does not:

  • commit to proceeding with a use case or to any design decisions that would affect implementation later
  • risk harming anyone
  • introduce or exacerbate any privacy or security risks.

If, during this experimentation phase, there is a likelihood of proceeding with the AI use case, agencies should apply the policy. Agencies should also apply the Australian Government AI technical standard, which provides relevant information for developing use cases at each stage of the AI lifecycle.

Footnotes

[1] As defined by the Privacy Act 1988 (Cth).

[2] As defined by the Australian Government Protective Security Policy Framework.
