Technical standard for government’s use of artificial intelligence
Introduction
The AI technical standard (the standard) sets consistent practices for government agencies adopting artificial intelligence (AI) systems across the AI lifecycle.
The standard brings together a set of practices for procuring, designing, developing, deploying, and using AI systems. It translates the Australian Government’s AI Ethics Principles into a set of technical requirements and guidelines, and complements the Policy for the responsible use of AI in government, the AI Assurance framework, and the Voluntary AI Safety Standard.
The standard adopts the OECD definition of an artificial intelligence (AI) system, reproduced under ‘AI system’ in the key terms below.
Key terms
AI incident
An event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
- injury or harm to the health of a person or groups of people
- disruption of the management and operation of critical infrastructure
- violations of human rights or a significant breach of obligations under applicable laws, including intellectual property, privacy and Indigenous cultural and intellectual property
- harm to property, communities or the environment.
AI model
‘A model is defined as a “physical, mathematical or otherwise logical representation of a system, entity, phenomenon, process or data” in the ISO/IEC 22989 standard. AI models include, among others, statistical models and various kinds of input-output functions (such as decision trees and neural networks). An AI model can represent the transition dynamics of the environment, allowing an AI system to select actions by examining their possible consequences using the model. AI models can be built manually by human programmers or automatically through, for example, unsupervised, supervised, or reinforcement machine learning techniques.’ OECD definition.
AI system
‘An Artificial Intelligence (AI) system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.' OECD definition.
AI watermarking
Information embedded into digital content, either perceptibly or imperceptibly by humans, that can serve a variety of purposes, such as establishing digital content provenance or informing stakeholders that the contents are AI-generated or significantly modified.
AI-generated content watermarking: a procedure by which watermarks are embedded into AI-generated content. This embedding can occur at 2 distinct stages: during generation, by altering a GenAI model’s inference procedure, or post-generation, as the content is distributed along the data and information distribution chain. C2PA.
Algorithm
‘A clearly specified mathematical process for computation; a set of rules that, if followed, will give a prescribed result’ NIST definition.
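For illustration only (this example is not part of the standard), Euclid’s greatest-common-divisor algorithm shows the idea of a clearly specified set of rules that, if followed, always yields a prescribed result:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until b reaches 0; the remaining value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```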
Application programming interface (API)
‘A system access point or library function that has a well-defined syntax and is accessible from application programs or user code to provide well-defined functionality.’ NIST definition.
Artificial general intelligence (AGI)
‘Artificial general intelligence (AGI), also known as strong AI, is the (currently hypothetical) intelligence of a machine that can accomplish any intellectual task that a human can perform. AGI is a trait attributed to future autonomous AI systems that can achieve goals in a wide range of real or virtual environments at least as effectively as humans can.’ Gartner.
Bias
‘systematic difference in treatment of certain objects, people, or groups in comparison to others’ ISO/IEC 24027.
C2PA
The Coalition for Content Provenance and Authenticity, or C2PA, provides an open technical standard for publishers, creators and consumers to establish the origin and edits of digital content.
Classification model
‘Machine learning model whose expected output for a given input is one or more classes’ ISO/IEC 23053.
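As a sketch only (a hand-written rule, not a trained model and not part of the standard), the following shows the essential behaviour the ISO/IEC 23053 definition describes: a function whose expected output for a given input is one of a fixed set of classes.

```python
# A minimal, hypothetical classification model: it maps a numeric input
# to exactly one class from a fixed set of classes.
def classify_temperature(celsius: float) -> str:
    if celsius < 10:
        return "cold"
    if celsius < 25:
        return "mild"
    return "hot"

print(classify_temperature(18))  # prints mild
```

A trained machine learning classifier differs only in that the mapping is learned from data rather than written by hand.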
Data labelling
‘data labelling, in which datasets are labelled, which means that samples are associated with target variables.’ ISO/IEC 22989.
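A hypothetical labelled dataset (illustrative values, not from the standard) makes the ISO/IEC 22989 wording concrete: each sample is associated with a target variable.

```python
# Each sample pairs an input (here, free text) with a target variable
# (the label), as in ISO/IEC 22989's description of data labelling.
samples = [
    {"text": "Refund my payment",    "label": "billing"},
    {"text": "App crashes on login", "label": "technical"},
]

labels = [sample["label"] for sample in samples]
print(labels)  # prints ['billing', 'technical']
```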
Dataset
‘collection of data with a shared format’ ISO/IEC 22989.
Explainability
‘property of an AI system to express important factors influencing the AI system results in a way that humans can understand’ ISO/IEC 22989.
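One way to see this property (a sketch with invented weights and features, not an approach mandated by the standard) is a linear scoring model, where each feature’s contribution to the result can be reported in terms a human can understand:

```python
# Hypothetical linear model: the output is a weighted sum of features,
# so the factors influencing the result can be listed explicitly.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

# Per-feature contributions explain the score in human-readable terms.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {value:+.1f}")
print(f"total score: {score:.1f}")
```

More complex models need dedicated explanation techniques, but the goal is the same: expressing the important factors behind a result in an understandable way.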