-
The lifecycle statements
-
Whole of AI lifecycle: statements 1 - 8
-
The whole of AI lifecycle stage includes statements that apply across multiple AI product lifecycle stages, grouped here for ease of use and to minimise content duplication.
-
The challenges for government use of AI are complex and linked with other governance considerations, such as:
- the APS Code of Conduct
- data governance
- cyber security
- ICT infrastructure
- privacy
- sourcing and procurement
- copyright
- ethics practices.
Across the lifecycle stages, agencies should consider:
- technology operations – to ensure compliance, efficiency, and ethical standards
- reference architecture – to provide structured frameworks that guide the design, development, and management of AI solutions
- people capabilities – having the specialised skills required for successful implementation
- auditability – enabling external scrutiny, supporting transparency, and accountability
- explainability – identifying what needs to be explained and when, making complex AI processes transparent and trustworthy
- system bias – maintaining the role of positive bias in delivering meaningful outcomes, while mitigating the source and impacts of problematic bias
- version control – tracking and managing changes to information to inform stakeholder decision-making
- watermarking – to embed visual or hidden markers into generated content so that its creation details can be identified (see the sketch below).
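To make the watermarking consideration more concrete, the following is a minimal, hypothetical Python sketch of embedding and recovering a hidden provenance tag in generated text using zero-width characters. The function names, the encoding scheme, and the tag format are assumptions for illustration only; production watermarking typically relies on model-level or cryptographic techniques.

```python
# Minimal illustration of content watermarking: embed a hidden provenance tag
# (zero-width-encoded) into AI-generated text so its creation details can later
# be recovered. The scheme and names are assumptions for this sketch only.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag as a sequence of invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    hidden = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return text + hidden

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, ignoring all visible characters."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="ignore")

marked = embed_watermark("Generated summary of the report.", "agency=EX|model=demo-1")
print(extract_watermark(marked))  # agency=EX|model=demo-1
```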
-
Notes:
Agencies must consider intellectual property rights and ownership derived from procured services or datasets used (including generative AI outputs) to comply with copyright law.
Management of bias in an AI system is critical to ensuring compliance with Australia’s anti-discrimination law.
All documents relating to the establishment, design, and governance of an AI implemented solution must be retained to comply with information management legislation.
Agencies must comply with data privacy and protection practices as per the Australian Privacy Principles.
Agencies must consider data and lineage compliance with Australian Government regulations.
Agencies should refer to the Policy for the responsible use of AI in government to implement AI fundamentals training for all staff, regardless of their role. To support agencies with their implementation of the Policy, the DTA provides Guidance for staff training on AI.
Australian Government API guidelines mandate the use of semantic versioning (see the sketch after these notes).
Agencies should refer to the Australian parliamentary recommendations on AI, including those on risk management and people capabilities, and implement measures to address algorithmic bias.
Any infrastructure, both software and hardware, for AI services and solutions must adhere to Australian Government regulations and should treat security as a priority, as recommended by the Australian Government guidance on AI System Development, Deploying AI Systems Securely and Engaging with AI. The recommendations include secure well-architected environments, whether on-premises, cloud-based, or hybrid, to maintain the confidentiality, integrity, and availability of AI services.
Agencies using cloud-based systems should refer to Cloud Financial Optimisation (Cloud FinOps).
Agencies must consider security frameworks, controls and practices with respect to the Information security manual (ISM), Essential Eight maturity model, Protective Security Policy Framework and Strategies to mitigate cyber security incidents.
Reuse digital, ICT, data and AI solutions in line with the Australian Government Reuse standard. This includes pre-existing AI assets and components from organisational repositories or open-source platforms.
The Budget Process Operational Rules (BPORs) mandate that entities must consult with the DTA before seeking authority to come forward for Expenditure Review Committee agreement to digital and ICT-enabled New Policy Proposals, to meet the requirements of the Digital and ICT Investment Oversight Framework. Digital proposals likely to have financial implications of $30 million or more may be subject to the ICT Investment Approval Process (IIAP).
Management of human, society and environmental impact should ensure alignment with National Agreement on Closing the Gap, Working for Women – A Strategy for Gender Equality, Australia’s Disability Strategy 2021-2031, National Plan to End Gender Based Violence, APS Net Zero Emissions by 2030 Strategy, Environmentally Sustainable Procurement Policy and Environmental impact assessment.
The DTA oversees sourcing of digital and ICT for the whole of government and provides a suite of policies and guidelines to support responsible procurement practices of agencies, such as the Procurement and Sourcing | aga and Lifecycle - BuyICT guidance. AI model clauses provide guidance for purchasing AI systems.
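As an illustration of the semantic versioning note above, the sketch below parses a MAJOR.MINOR.PATCH version string and applies the usual compatibility rule that only a MAJOR increment signals a breaking change. The helper names and the rule as coded here are assumptions for this sketch, not part of the API guidelines.

```python
# Illustrative semantic versioning check (MAJOR.MINOR.PATCH).
# The helper names and compatibility rule below are assumptions for this sketch.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' string into integer components."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_backwards_compatible(current: str, proposed: str) -> bool:
    """Under semantic versioning, only a MAJOR increment signals a breaking change."""
    return parse_semver(proposed)[0] == parse_semver(current)[0]

print(parse_semver("2.4.1"))                      # (2, 4, 1)
print(is_backwards_compatible("2.4.1", "2.5.0"))  # True  (minor update)
print(is_backwards_compatible("2.4.1", "3.0.0"))  # False (breaking change)
```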
-
Statement 1
-
Secretaries Digital and Data Committee communique
Date: 19 June 2025
Strategic Discussion: Strengthening Cyber Security and Building Resilience
Australian Signals Directorate (ASD) Site Tour
Members toured the classified operations floor.
Strengthening Cyber Security and Building Resilience
ASD and the Department of Home Affairs jointly led a discussion on high-level threats and the cybersecurity uplift and hardening required to address risks.
Australian Public Service (APS) Digital Skills Program (Pilot) – Discovery findings and pilot proposal
Services Australia, in partnership with the Australian Public Service Commission (APSC), established a Whole-of-Government (WofG) Multi-Disciplinary Team (MDT) to undertake discovery work which informed development of a pilot proposal for a campus approach to uplift APS digital skills. Members also endorsed this proposal.
Adoption of GenAI in Government
Members discussed the AI in Government Action Plan initiative and current state of AI adoption across the APS, including the importance of leadership in driving confidence, capability and shared solutions across government.
myGov Investment Pipeline
The Committee noted and discussed the progress related to the myGov Investment Pipeline, agreed by Government in the 2024-25 Budget, with detail on the initial myGov Investment pipeline initiatives and future opportunities. The Committee was provided an update on the inaugural myGov Strategic Committee meeting, attended by 18 agencies across the Australian Government, held on 16 May 2025.
The date for the next SDDC meeting is 25 September 2025.
-
Statements
-
Design: statements 9 - 12
-
Designing AI systems that are effective, efficient, and ethical involves being clear on the problem, understanding the impacts of technical decisions, taking a design approach with humans at the centre, and having a clear definition of success.
In the design stage agencies consider how the AI system will operate with and impact existing processes, people, data, and technology. This includes considering potential malfunctions and harms.
Without appropriate design an AI system could:
- cause harm due to incorrect information arising from AI hallucinations, false positives, or false negatives
- be used beyond its intended purpose
- perpetuate existing injustices
- be misused, misunderstood, or abused
- be susceptible to malfunctions of another interacting system
- experience behaviour and performance issues caused by other external factors.
At the design stage agencies also determine the performance and reliability measures relevant to their AI system's tasks. Considerations when selecting metrics include business needs, performance, safety, reliability, explainability, and transparency.
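As a minimal sketch of how the false positives and false negatives mentioned above translate into concrete performance and reliability measures, the example below computes a few common metrics from a binary confusion matrix. The counts and the choice of metrics are illustrative assumptions; the measures an agency adopts remain a design decision.

```python
# Illustrative performance metrics from a binary confusion matrix.
# The counts below are made-up figures for this sketch.
tp, fp, fn, tn = 90, 10, 5, 895  # true/false positives, false/true negatives

precision = tp / (tp + fp)              # how often a positive prediction is correct
recall = tp / (tp + fn)                 # how many actual positives are caught
false_positive_rate = fp / (fp + tn)    # how often negatives are wrongly flagged
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.2f} recall={recall:.2f} "
      f"fpr={false_positive_rate:.3f} accuracy={accuracy:.3f}")
```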
-
The design stage includes concept development, requirements engineering, and solution design.
-
Services not covered by the Digital Service Standard
Agencies are encouraged to apply the Digital Service Standard to existing staff-facing services, though this is not mandated.
The Digital Service Standard does not apply to:
- state, territory or local government services
- personal ministerial websites that contain material on a minister’s political activities or views on issues not related to their ministerial role.
State, territory or local government and third parties may choose to apply the Digital Service Standard to improve access and discoverability of their digital services.
Some services may request an exemption from the Digital Service Standard. See the Exemptions section below.
-
Notes:
Under the Digital Experience Policy agencies must meet design standards for digital services.
The Voluntary AI Safety Standard outlines the need to establish and implement a risk management process to identify and mitigate risks.
-
Data: statements 13 - 19
-
The data stage involves establishing the processes and responsibilities for managing data across the AI lifecycle. This stage includes data used in experimenting, training, testing, and operating AI systems.
-
Data used by an AI system can be classified into development and deployment data.
Development data includes all inputs and outputs (and reference data for GenAI) used to develop the AI system. The development dataset is made up of smaller datasets – the train dataset, validation dataset, and test dataset.
- Train dataset – this dataset is used to train the AI system. The AI system learns patterns in the train dataset. The train dataset is the largest subset of the development dataset. For GenAI, the train dataset may also include reference or contextual datasets such as retrieval-augmented generation (RAG) datasets and prompt datasets
- Validation dataset – this dataset is used to evaluate the model's performance during model training. It is used to fine-tune and select the best-performing model, such as through cross validation
- Test dataset – this dataset is used to evaluate the final model's performance on previously unseen data. This dataset helps provide unbiased evaluation of model performance.
Deployment data includes AI system inputs such as live production data, user input data, configuration data, and AI system outputs such as predictions, recommendations, classifications, logs, and system health data. Deployment stage inputs are new and previously unseen by the AI system.
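As a minimal sketch of how a development dataset can be partitioned into the train, validation, and test subsets described above (the simple random split, the 80/10/10 ratio, and the fixed seed are all illustrative choices):

```python
# Minimal sketch: split a development dataset into train, validation, and test
# subsets. The 80/10/10 proportions and fixed seed are illustrative assumptions.
import random

def split_development_data(records: list, seed: int = 42) -> tuple[list, list, list]:
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    train_end, val_end = int(n * 0.8), int(n * 0.9)
    return shuffled[:train_end], shuffled[train_end:val_end], shuffled[val_end:]

train, validation, test = split_development_data(list(range(1000)))
print(len(train), len(validation), len(test))  # 800 100 100
```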
The performance of an AI system depends on robust management of data quality and data availability.
Key workstreams within this stage include:
- data orchestration – establishing central oversight of and planning the flow of data to an AI system from across datasets
- data transformation – converting and optimising data for use by the AI system
- feature engineering – methods to improve AI model training to better identify and learn patterns in the data
- data quality – measuring dimensions of a dataset associated with greater performance and reliability
- data validation – testing the consistency, accuracy, and reliability of the data to ensure it meets the requirements of the AI system (see the sketch after this list)
- data integration and fusion – combining data from multiple sources to synchronise the flow of data to the AI system
- data sharing – promoting reuse, reducing resources required for collection and analysis, and helping to build interoperability between systems and datasets
- model dataset establishment – using real-world production data to build, refine, and contextualise a high-quality AI model.
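To illustrate the data validation workstream, the sketch below applies simple completeness, type, and range checks to records before they reach an AI system. The field names and rules are hypothetical examples, not prescribed checks.

```python
# Minimal data validation sketch: check records against simple completeness,
# type, and range rules. The fields and thresholds are hypothetical examples.

def validate_record(record: dict) -> list[str]:
    """Return a list of validation issues found in a single record."""
    issues = []
    for field in ("id", "age", "postcode"):
        if record.get(field) is None:
            issues.append(f"missing value: {field}")
    age = record.get("age")
    if isinstance(age, (int, float)) and not 0 <= age <= 120:
        issues.append(f"age out of range: {age}")
    elif age is not None and not isinstance(age, (int, float)):
        issues.append(f"age has wrong type: {type(age).__name__}")
    return issues

records = [{"id": 1, "age": 34, "postcode": "2600"},
           {"id": 2, "age": 150, "postcode": None}]
for record in records:
    print(record["id"], validate_record(record) or "ok")
```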
-
Notes:
Requirements for handling personal and sensitive data within AI systems are included in the Privacy Act, the Australian Privacy Principles, Privacy and Other Legislation Amendment Act 2024 and the Handling personal information guidance.
Data archival and destruction must comply with the Information management legislation.
The Framework for the Governance of Indigenous Data provides guidelines on Indigenous data sovereignty.
The Office of the Australian Information Commissioner (OAIC) provides Guidelines on data matching in Australian Government administration, which agencies must consider prior to data integration and fusion activities.
The Information management for records created using Artificial Intelligence (AI) technologies | naa.gov.au provides guidelines to manage data for AI.
The Data Availability and Transparency Act 2022 (DATA Scheme) requires agencies to identify data as open, shared, or closed.
The Guidelines for data transfers | Cyber.gov.au provide guidance on the processes and procedures for data transfers and transmissions.
The APS Data Ethics Use Cases provide guidance for agencies to manage and mitigate data bias.
The report on Responding to societal challenges with data | OECD provides guidance on data access, sharing, and reuse of data.
-
Train: statements 20 - 25
-
The train stage covers the creation and selection of models and algorithms. The key activities in this stage include modelling, pre- and post-processing, model refinements, and fine-tuning. It also considers the use of pre-trained models and associated fine-tuning for the operational context.
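As a minimal sketch of model creation and selection in the train stage, the example below fits two candidate models on a train dataset and selects the better performer on a validation dataset. The models, the synthetic data, and the selection rule are illustrative assumptions; scikit-learn is used here only as an example framework.

```python
# Minimal sketch of the train stage: fit candidate models on the train dataset
# and select the best performer on the validation dataset (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
}
scores = {name: model.fit(X_train, y_train).score(X_val, y_val)
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```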
-
Exemptions
The DTA acknowledges that some agencies may be unable to meet one or more of the criteria set out by the Digital Service Standard due to a range of circumstances. These circumstances may include but are not limited to:
- legacy technology barriers that the agency cannot reasonably overcome
- substantial financial burden caused by changing a service to meet criteria.
Exemptions may be granted for one or more of the criteria set out by the Digital Service Standard. This will be assessed on a case-by-case basis. Exemptions must be applied for through the DTA.
Further information can be found in the Digital Experience Policy Exemption Guide.
Note: Even if a service or website is not covered by the Digital Service Standard, or an exemption is received, obligations may still apply under relevant Australian legislation, for example accessibility requirements under the Disability Discrimination Act 1992.