Notes:
The Pilot AI Assurance Framework recommends understanding the implications of decommissioning an AI system to ensure agencies can address all potential consequences.
The Voluntary AI Safety Standard promotes proactive stakeholder engagement during the retirement stage, as well as the importance of maintaining detailed records.
The Policy for the responsible use of AI in government encourages the enhancement of business and technology processes to support APS AI capability uplift over time.
The standard adopts an agency-first approach. Rather than introducing new processes or duplication, it emphasises the reuse of agency policies, frameworks and practices.
Agencies may choose to incorporate the standard into existing frameworks, such as project governance or data governance frameworks, to cover AI-related activities. The standard complements existing frameworks and legislation to help ensure agencies meet their obligations in the use of AI.
The challenges for government use of AI are complex and linked with other governance considerations, such as:
While not exhaustive, a list of related existing frameworks and related resources is provided by the Policy for the responsible use of AI in government.
The practices outlined in this document take the form of standard statements, criteria, and explanatory notes.
The level of detail and implementation of each statement will vary across use cases. Practical use case guidance has been provided in the Use Case Applications section of the standard.
The standard is applicable regardless of whether an agency develops an AI system in-house or contracts an external provider to build or supply it. Engaging external providers does not prevent agencies from implementing each of the criteria in the statements. Agencies that adopt the standard are accountable for ensuring it is met in line with the required and recommended criteria.
Transparency documents, including those for open-source software, can be used to support assessments.
For early experimentation, proofs of concept, and pilots of AI products and services, the standard should be used as guidance for building responsible and safe AI systems, ensuring a clear pathway to production.
The standard helps government:
The standard applies to:
Examples of the types of AI considered for the standard include machine learning, computer vision, deep learning, artificial neural networks, generative AI (GenAI), or any combination of these.
While the following list is out of scope, agencies can adapt and apply the standard to these items at their own discretion:
The standard does not define, but works in conjunction with, the following:
Entities external to government agencies include the following:
The standard will impact roles and responsibilities at varying organisational levels. The following functions may be impacted and assisted by the standard, noting that individuals may perform multiple roles: