The practices described in the standard use a reference AI lifecycle model to ensure holistic coverage of an AI system from inception to retirement, as shown in the 'AI lifecycle diagram' below.
The statements and criteria outlined in this standard are structured according to the relevant lifecycle stages and are intended to be implemented through an iterative process.
The AI system lifecycle is a structured process that occurs in stages, ensuring the holistic coverage of the AI system from discovery to retirement.
The AI lifecycle stages include:
- Design
- Data
- Train
- Evaluate
- Integrate
- Deploy
- Monitor
- Decommission
This lifecycle model is based on the Voluntary AI Safety Standard.
AI lifecycle diagram
AI system development is generally iterative. At any point in the lifecycle, issues, risks, or opportunities for improvement may be discovered that prompt changes to system requirements, design, data, model, or test cases. After deployment, feedback and issues may likewise prompt changes to the requirements.
Each agency may have existing architecture and processes relating to the adoption and implementation of AI systems. The standard complements existing architecture and processes.
The Policy for the responsible use of AI in government encourages continuous improvement to enable AI capability uplift.
The challenges for government use of AI are complex and linked with other governance considerations, such as:
Across the lifecycle stages, agencies should consider:
Agencies must consider intellectual property rights and ownership of material derived from procured services or datasets used (including generative AI outputs) to comply with copyright law.
Management of bias in an AI system is critical to ensuring compliance with Australia’s anti-discrimination law.
All documents relating to the establishment, design, and governance of an implemented AI solution must be retained to comply with information management legislation.
Agencies must comply with data privacy and protection practices as per the Australian Privacy Principles.
Agencies must ensure that data and its lineage comply with Australian Government regulations.
Agencies should refer to the Policy for the responsible use of AI in government to implement AI fundamentals training for all staff, regardless of their role. To support agencies with their implementation of the Policy, the DTA provides Guidance for staff training on AI.
Australian Government API guidelines mandate the use of semantic versioning.
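Semantic versioning expresses an API version as MAJOR.MINOR.PATCH, where the MAJOR component increments on breaking changes, MINOR on backwards-compatible additions, and PATCH on backwards-compatible fixes. As a minimal illustrative sketch (the function names below are assumptions, not part of the API guidelines), a consumer of a versioned API could parse and compare versions like this:

```python
import re

# A semantic version has the form MAJOR.MINOR.PATCH, e.g. 2.1.0.
SEMVER_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse a MAJOR.MINOR.PATCH string into a comparable (major, minor, patch) tuple."""
    match = SEMVER_RE.match(version)
    if match is None:
        raise ValueError(f"not a semantic version: {version!r}")
    return tuple(int(part) for part in match.groups())

def is_breaking_change(old: str, new: str) -> bool:
    """A consumer must review its integration if the MAJOR component increased."""
    return parse_semver(new)[0] > parse_semver(old)[0]
```

For example, `is_breaking_change("1.4.2", "2.0.0")` returns `True` because the MAJOR component increased, while `is_breaking_change("1.4.2", "1.5.0")` returns `False`.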
Agencies should refer to the Australian parliamentary recommendations on AI, including those on risk management and people capabilities, and implement measures to address algorithmic bias.
Any infrastructure for AI services and solutions, both software and hardware, must adhere to Australian Government regulations and should treat security as a priority, as recommended by the Australian Government guidance on AI system development, Deploying AI Systems Securely and Engaging with AI. The recommendations include secure, well-architected environments, whether on-premises, cloud-based, or hybrid, to maintain the confidentiality, integrity, and availability of AI services.
Agencies using cloud-based systems should refer to Cloud Financial Optimisation (Cloud FinOps).
Agencies must consider security frameworks, controls and practices with respect to the Information security manual (ISM), Essential Eight maturity model, Protective Security Policy Framework and Strategies to mitigate cyber security incidents.
Reuse digital, ICT, data and AI solutions in line with the Australian Government Reuse standard. This includes pre-existing AI assets and components from organisational repositories or open-source platforms.
The Budget Process Operational Rules (BPORs) mandate that entities must consult with the DTA before seeking authority to come forward for Expenditure Review Committee agreement to digital and ICT-enabled New Policy Proposals, to meet the requirements of the Digital and ICT Investment Oversight Framework. Digital proposals likely to have financial implications of $30 million or more may be subject to the ICT Investment Approval Process (IIAP).
Management of human, society and environmental impact should ensure alignment with National Agreement on Closing the Gap, Working for Women – A Strategy for Gender Equality, Australia’s Disability Strategy 2021-2031, National Plan to End Gender Based Violence, APS Net Zero Emissions by 2030 Strategy, Environmentally Sustainable Procurement Policy and Environmental impact assessment.
The DTA oversees sourcing of digital and ICT for the whole of government and provides a suite of policies and guidelines to support responsible procurement practices of agencies, such as the Procurement and Sourcing guidance on the Australian Government Architecture and the Lifecycle guidance on BuyICT. AI model clauses provide guidance for purchasing AI systems.
Criterion 1: Identify a suitable operational model to design, develop, and deliver the system securely and efficiently.
Implementing effective operational models for AI systems requires careful consideration to ensure compliance, efficiency, and ethical standards. Operational models also provide tools for traceability, reproducibility, and modularity.
Existing operational models can be used or extended for AI systems. Operational models can streamline the iterative design, development, and delivery of AI systems so they are built more securely, efficiently, and reliably. Some examples include:
- MLOps
- LLMOps
The above list contains examples that are at varying levels of abstraction. For example, LLMOps is a type of MLOps as it inherits many of the same properties.
Ensure governance and security are integrated into the operational model.
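The traceability and reproducibility that an operational model such as MLOps provides can be illustrated with a minimal sketch: hashing the training data and recording model metadata so that a given run can later be audited and reproduced. The function names and manifest layout below are illustrative assumptions, not prescribed by the standard.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Content hash of a dataset file, so a run can be traced to the exact data used."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        # Read in chunks so large datasets are hashed without loading into memory.
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_run(dataset: Path, model_name: str, model_version: str) -> dict:
    """Assemble a minimal, auditable record of one training or evaluation run."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": str(dataset),
        "dataset_sha256": file_sha256(dataset),
        "model": model_name,
        "model_version": model_version,
    }
```

Persisting such a record alongside each deployment (for example, as JSON in the system's governance repository) gives reviewers a verifiable link between a deployed model and the exact data it was trained or evaluated on.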
Criterion 2: Consider the technology impacts of the operating model.
These include:
Note: the impacts listed will depend on the selection of the data and model, and on any additional training applied to the model.
Criterion 3: Consider suitable technology hosting strategies.
The hosting strategy can involve one of the following models:
- on-premises
- cloud-based
- hybrid
The strategy to adopt should consider:
The Service Standard is mandatory and applies to informational and transactional digital services that are:
The use of a reference architecture provides a structured framework that guides the design, development, and management of an AI system.
Criterion 4: Evaluate existing reference architectures.
Make use of the Australian Government Architecture to:
Criterion 5: Monitor emerging reference architectures to evaluate and update the AI system.
New architectural paradigms are emerging that address complex AI applications, including:
The standard was assessed against a selection of use cases across government agencies. Outcomes were collated to identify how the standard can be used across each lifecycle stage.
The assessment considered:
The applicability of the standard varied based on who built each part of the AI system:
- built and managed in-house
- partially built and fully managed in-house
- largely built and managed externally
- incidental usage of AI
The applicability of the statements in the standard was tested against each AI use case to determine whether the standard could be applied. In some cases, such as when a pre-trained model is used, applicability may be conditional: it depends on the use case, vendor responsibility, and how AI is integrated into the environment.
Applicability of the standard has been categorised as:
- Applicable
- Conditional
- N/A (not applicable)
The following table shows the applicability of the standard against each lifecycle phase:
| Phase | Built and managed in-house | Partially built and fully managed in-house | Largely built and managed externally | Incidental usage of AI |
|---|---|---|---|---|
| Whole of AI Lifecycle | Applicable | Applicable | Applicable | N/A |
| Design | Applicable | Applicable | Applicable | N/A |
| Data | Applicable | Conditional | Conditional | N/A |
| Train | Applicable | Conditional | Conditional | N/A |
| Evaluate | Applicable | Applicable | Conditional | N/A |
| Integrate | Applicable | Applicable | Conditional | N/A |
| Deploy | Applicable | Applicable | Conditional | N/A |
| Monitor | Applicable | Applicable | Applicable | N/A |
| Decommission | Applicable | Applicable | Applicable | N/A |
Informational services provide information to users, such as reports, fact sheets or videos. They may include: