The challenges for government use of AI are complex and linked with other governance considerations, such as:
Across the lifecycle stages, agencies should consider:
Notes:
Agencies must consider intellectual property rights and ownership arising from procured services or datasets used (including generative AI outputs) to comply with copyright law.
Management of bias in an AI system is critical to ensuring compliance with Australia’s anti-discrimination law.
All documents relating to the establishment, design, and governance of an implemented AI solution must be retained to comply with information management legislation.
Agencies must comply with data privacy and protection practices as per the Australian Privacy Principles.
Agencies must consider data and lineage compliance with Australian Government regulations.
Agencies should refer to the Policy for the responsible use of AI in government to implement AI fundamentals training for all staff, regardless of their role. To support agencies with their implementation of the Policy, the DTA provides Guidance for staff training on AI.
Australian Government API guidelines mandate the use of semantic versioning.
Agencies should refer to Australian parliamentary recommendations on AI, including those on risk management and people capabilities, and implement measures to address algorithmic bias.
Any infrastructure, both software and hardware, for AI services and solutions must adhere to Australian Government regulations and should treat security as a priority, as recommended by the Australian Government guidance on AI system development, Deploying AI Systems Securely and Engaging with AI. The recommendations include secure, well-architected environments, whether on-premises, cloud-based, or hybrid, to maintain the confidentiality, integrity, and availability of AI services.
Agencies using cloud-based systems should refer to Cloud Financial Optimisation (Cloud FinOps).
Agencies must consider security frameworks, controls and practices with respect to the Information security manual (ISM), Essential Eight maturity model, Protective Security Policy Framework and Strategies to mitigate cyber security incidents.
Reuse digital, ICT, data and AI solutions in line with the Australian Government Reuse standard. This includes pre-existing AI assets and components from organisational repositories or open-source platforms.
The Budget Process Operational Rules (BPORs) mandate that entities must consult with the DTA before seeking authority to come forward for Expenditure Review Committee agreement to digital and ICT-enabled New Policy Proposals, to meet the requirements of the Digital and ICT Investment Oversight Framework. Digital proposals likely to have financial implications of $30 million or more may be subject to the ICT Investment Approval Process (IIAP).
Management of human, society and environmental impact should ensure alignment with National Agreement on Closing the Gap, Working for Women – A Strategy for Gender Equality, Australia’s Disability Strategy 2021-2031, National Plan to End Gender Based Violence, APS Net Zero Emissions by 2030 Strategy, Environmentally Sustainable Procurement Policy and Environmental impact assessment.
The DTA oversees sourcing of digital and ICT for the whole of government and provides a suite of policies and guidelines to support agencies' responsible procurement practices, such as the Procurement and Sourcing guidance on the Australian Government Architecture and the Lifecycle guidance on BuyICT. The AI model clauses provide guidance for purchasing AI systems.
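The semantic versioning requirement noted above (from the Australian Government API guidelines) can be sketched minimally. The helper functions below are illustrative assumptions, not part of the guidelines:

```python
# A minimal sketch of semantic versioning (MAJOR.MINOR.PATCH) handling.
# Illustrative only; it does not cover pre-release or build-metadata tags.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' string into comparable integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_breaking_change(old: str, new: str) -> bool:
    """Under semver, a MAJOR version bump signals an incompatible API change."""
    return parse_semver(new)[0] > parse_semver(old)[0]
```

Because the three components compare as integers, consumers of an API can programmatically detect when an upgrade is backwards-incompatible.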
Date: 19 June 2025
Members toured the classified operations floor.
ASD and the Department of Home Affairs jointly led a discussion on high-level threats and the cybersecurity uplift and hardening required to address risks.
Services Australia, in partnership with the Australian Public Service Commission (APSC), established a Whole-of-Government (WofG) Multi-Disciplinary Team (MDT) to undertake discovery work which informed development of a pilot proposal for a campus approach to uplift APS digital skills. Members also endorsed this proposal.
Members discussed the AI in Government Action Plan initiative and current state of AI adoption across the APS, including the importance of leadership in driving confidence, capability and shared solutions across government.
The Committee noted and discussed progress on the myGov Investment Pipeline, agreed by Government in the 2024-25 Budget, including detail on the initial pipeline initiatives and future opportunities. The Committee was also provided an update on the inaugural myGov Strategic Committee meeting, attended by 18 agencies across the Australian Government, held on 16 May 2025.
The date for the next SDDC meeting is 25 September 2025.
Designing AI systems that are effective, efficient, and ethical involves being clear on the problem, understanding the impacts of technical decisions, taking a design approach with humans at the centre and having a clear definition of success.
In the design stage agencies consider how the AI system will operate with and impact existing processes, people, data, and technology. This includes considering potential malfunctions and harms.
Without appropriate design an AI system could:
At the design stage agencies also determine the performance and reliability measures relevant to their AI system's tasks. Considerations when selecting metrics include business needs, performance, safety, reliability, explainability, and transparency.
Agencies are recommended to apply the Digital Service Standard to existing staff-facing services, though this is not mandated for these services.
The Digital Service Standard does not apply to:
State, territory or local government and third parties may choose to apply the Digital Service Standard to improve access and discoverability of their digital services.
Some services may request an exemption from the Digital Service Standard. See the Exemptions section below.
Notes:
Under the Digital Experience Policy agencies must meet design standards for digital services.
The Voluntary AI Safety Standard outlines the need to establish and implement a risk management process to identify and mitigate risks.
Data used by an AI system can be classified into development and deployment data.
Development data includes all inputs and outputs (and reference data for GenAI) used to develop the AI system. The dataset is made up of smaller datasets – train dataset, validation dataset, and test dataset.
Deployment data includes AI system inputs such as live production data, user input data, configuration data, and AI system outputs such as predictions, recommendations, classifications, logs, and system health data. Deployment stage inputs are new and previously unseen by the AI system.
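The partitioning of development data into train, validation, and test datasets described above can be sketched as follows. The 70/15/15 split ratios and function name are illustrative assumptions, not a mandated standard:

```python
# A minimal sketch of partitioning development data into the train,
# validation, and test datasets described above. Split ratios are
# illustrative; real projects choose them per use case.
import random

def split_development_data(records, train=0.70, validation=0.15, seed=42):
    """Shuffle records and split them into train/validation/test subsets."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)   # seeded for reproducibility
    n = len(shuffled)
    train_end = int(n * train)
    val_end = train_end + int(n * validation)
    return {
        "train": shuffled[:train_end],
        "validation": shuffled[train_end:val_end],
        "test": shuffled[val_end:],   # held out; unseen during training
    }
```

Keeping the test dataset untouched until final evaluation mirrors the deployment stage, where inputs are new and previously unseen by the AI system.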
The performance of an AI system is dependent on robust management of data quality and the availability of data.
Key workstreams within this stage include:
Notes:
Requirements for handling personal and sensitive data within AI systems are included in the Privacy Act, the Australian Privacy Principles, Privacy and Other Legislation Amendment Act 2024 and the Handling personal information guidance.
Data archival and destruction must comply with the Information management legislation.
The Framework for the Governance of Indigenous Data provides guidelines on Indigenous data sovereignty.
The Office of the Australian Information Commissioner (OAIC) provides Guidelines on data matching in Australian Government administration, which agencies must consider prior to data integration and fusion activities.
The National Archives of Australia guidance, Information management for records created using Artificial Intelligence (AI) technologies, provides guidelines to manage data for AI.
The Data Availability and Transparency Act 2022 (which establishes the DATA Scheme) requires agencies to identify data as open, shared, or closed.
The Guidelines for data transfers (Cyber.gov.au) provide guidance on the processes and procedures for data transfers and transmissions.
The APS Data Ethics Use Cases provide guidance for agencies to manage and mitigate data bias.
The report on Responding to societal challenges with data | OECD provides guidance on data access, sharing, and reuse of data.
The DTA acknowledges that some agencies may be unable to meet one or more of the criteria set out by the Digital Service Standard due to a range of circumstances. These circumstances may include but are not limited to:
Exemptions may be granted for one or more of the criteria set out by the Digital Service Standard. This will be assessed on a case-by-case basis. Exemptions must be applied for through the DTA.
Further information can be found in the Digital Experience Policy Exemption Guide.
Note: Even if a service or website is not covered by the Digital Service Standard, or an exemption is received, obligations may still apply under relevant Australian legislation, for example accessibility requirements under the Disability Discrimination Act 1992.
AI training involves processing large amounts of data to enable AI models to recognise patterns, make predictions, draw inferences, and generate content. This process creates a mathematical model with parameters that can range from a few to trillions. Training an AI model requires repeated adjustment of these parameters, which demands increased processing power and storage.
Training a model can be compute-heavy, relying on infrastructure that may be very expensive. The model architecture, including the choice of AI algorithm and learning strategy, together with the size of the training dataset, will influence the infrastructure requirements for the training environment.
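What "adjusting parameters" means in practice can be illustrated with a single gradient-descent step for a one-parameter model. This is a deliberately tiny sketch; real models adjust up to trillions of parameters per step, which is why training is so compute-heavy:

```python
# A minimal, illustrative sketch of one training step for a one-parameter
# linear model y = w * x, adjusted via gradient descent on squared error.
# Not any agency's actual training method.

def training_step(w: float, x: float, y: float, lr: float = 0.1) -> float:
    """Nudge parameter w to reduce the squared error between w*x and y."""
    prediction = w * x
    gradient = 2 * (prediction - y) * x   # d/dw of (w*x - y)**2
    return w - lr * gradient              # move w against the gradient
```

Repeating this step over many examples is what consumes processing power; scaling the same arithmetic to billions of parameters drives the infrastructure costs described above.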
The AI model encapsulates a complex mathematical relationship between input and output data, derived from patterns in a modelling dataset. AI models can be chained together to provide more complex capabilities.
Pre-processing and post-processing augment the capabilities of the AI model. Application, platform, and infrastructure components are shown here as well, as they all contribute to the overall behaviour and performance of the whole AI system.
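The way pre-processing, the model, and post-processing chain together can be sketched as a simple pipeline. The functions below are hypothetical placeholders (the "model" is a toy scoring function), intended only to show how each component's output feeds the next:

```python
# A minimal sketch of an AI system pipeline: pre-processing and
# post-processing augmenting a model, with stages chained together.
# All functions are illustrative stand-ins, not real AI components.

def preprocess(text: str) -> str:
    """Normalise the raw input before it reaches the model."""
    return text.strip().lower()

def toy_model(text: str) -> float:
    """Stand-in for an AI model: maps an input to a score in [0, 1]."""
    return min(len(text) / 10.0, 1.0)

def postprocess(score: float) -> str:
    """Turn the raw model output into a decision label."""
    return "review" if score >= 0.5 else "auto-approve"

def run_pipeline(text: str) -> str:
    """Chain the stages: each component's output feeds the next."""
    return postprocess(toy_model(preprocess(text)))
```

Chaining in this style also shows why the whole system, not just the model, determines behaviour: a change to `preprocess` or `postprocess` alters outcomes even if the model is untouched.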
Due to the number of mathematical computations involved and the time taken to execute them, training can be a highly resource-intensive stage of the AI lifecycle. Its intensity will depend on the infrastructure resources available, the algorithms used to train the AI model, and the size of the training datasets.
Key considerations during this stage include:
If, after multiple refinement attempts, the model does not meet requirements or success criteria, a new model may need to be created, the business requirements updated, or the model retired.
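That refinement loop can be sketched as follows. The attempt limit, threshold, and function names are illustrative assumptions; the escalation decision itself belongs to the business, not the code:

```python
# A hedged sketch of the refinement decision described above: retrain up to
# a fixed number of attempts, then escalate. Purely illustrative.

def refine_until_acceptable(train_fn, evaluate_fn, threshold, max_attempts=3):
    """Retrain until the model meets the success criterion or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        model = train_fn(attempt)          # produce a refined candidate model
        if evaluate_fn(model) >= threshold:
            return model, "accepted"
    # Repeated failure: create a new model, revisit the business
    # requirements, or retire the model.
    return None, "escalate"
```

Recording each attempt's evaluation result alongside the version-controlled model (see the Whole of AI lifecycle section) makes the eventual accept-or-retire decision auditable.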
See the Design lifecycle stage for details on measuring model outputs, as well as business and user feedback, to manage AI model performance.
See the Apply version control practices statement in the Whole of AI lifecycle section for detail on tracking changes to training models, trained models, algorithms, learning types, and hyperparameters.