Policy aims
This policy aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians, while using it safely, ethically and responsibly, in line with community expectations.
Embrace the opportunity
This policy aims to provide a unified approach to enable government to accelerate AI adoption and embrace the AI opportunity. It is designed to reduce barriers to government adoption by helping agencies confidently approach AI governance and implementation.
It aims to ensure agencies have the right settings in place to take advantage of the opportunities presented by AI and fully realise benefits such as improved efficiency, accuracy and service delivery.
Strengthen public trust
This policy aims to strengthen public trust in government adoption of AI by positioning the Australian Government as an exemplar in safe and responsible AI use.
It is designed to enable the responsible use of AI across government, through setting consistent requirements for transparency and accountability, and by requiring risk-based oversight of AI use cases.
Adapt to change
This policy aims to embed a forward-leaning, adaptive approach for government’s use of AI that evolves as the technological and policy environment changes.
It supports agencies at different stages of their AI adoption journey and sets requirements that scale with the agency’s use of AI.
Implementation
Application
This version of the policy (v2.0) takes effect on 15 December 2025. It replaces version 1.1 of the policy, which came into effect on 1 September 2024.
All non-corporate Commonwealth entities (NCEs), as defined by the Public Governance, Performance and Accountability Act 2013, must apply this policy.
Corporate Commonwealth entities are also encouraged to apply this policy.
National security carveouts
This policy does not apply to:
- the use of AI in the defence portfolio.
- the 'national intelligence community' (NIC) as defined by Section 4 of the Office of National Intelligence Act 2018.
The NIC includes:
- Office of National Intelligence (ONI)
- Australian Signals Directorate (ASD)
- Australian Security Intelligence Organisation (ASIO)
- Australian Secret Intelligence Service (ASIS)
- Australian Geospatial-Intelligence Organisation (AGO)
- Defence Intelligence Organisation (DIO)
- Australian Criminal Intelligence Commission (ACIC)
- the intelligence role and functions of the Australian Transaction Reports and Analysis Centre (AUSTRAC), Australian Federal Police (AFP), the Department of Home Affairs and the Department of Defence.
Defence and members of the NIC may voluntarily adopt elements of this policy where they are able to do so without compromising national security capabilities or interests.
Existing frameworks
The challenges raised by government use of AI are complex and inherently linked with other considerations, such as the APS Code of Conduct, data governance, cyber security, privacy and ethics practices.
This policy has been designed to complement and strengthen – not duplicate – existing frameworks, legislation and practices that touch on government’s use of AI.
This policy must be read and applied alongside existing frameworks and laws to ensure agencies meet all their obligations.
Principles
- Adopt AI to enhance efficiency, decision-making, policy outcomes and government service delivery for the benefit of Australians.
- Establish clear accountabilities for the adoption of AI and understand how it is used.
- Build public trust through transparency about government AI use.
Mandatory requirements
AI transparency statement
Agencies must make a publicly available statement outlining their approach to AI adoption and use, as prescribed under the Standard for transparency statements.
The statement must be reviewed and updated at least annually, or sooner if the agency makes significant changes to its approach to AI.
Agencies must notify the DTA when they publish or make any changes to their AI transparency statement by emailing ai@dta.gov.au.
Strategic position on AI adoption
Agencies must develop a strategic position on AI adoption within 6 months of this policy taking effect. This position should set out how the agency will identify and embrace AI opportunities.
Agencies must communicate their strategic position on AI to give staff clear direction on AI adoption. In line with their current and anticipated use of AI, agencies can develop a standalone AI strategy, augment an existing strategy or create other materials to communicate the approach to staff.
Accountable officials
Agencies must designate accountable official(s) responsible for implementing this policy.
Agencies must follow the Standard for accountability when designating accountable official(s) and implementing this requirement. The responsibilities of accountable officials are set out in the standard.
Agencies must notify the DTA when they designate, or make any changes to, their accountable official(s) by emailing ai@dta.gov.au.
Accountable use case owners
Agencies must designate an accountable use case owner for each in-scope AI use case within 12 months of this policy taking effect. Accountable official(s) are to maintain a register of accountable use case owners.
Agencies must follow the Standard for accountability when implementing this requirement. The responsibilities of accountable use case owners are set out in the standard.
Internal AI use case register
Agencies must create a register of in-scope AI use cases to enable accountable official(s) to record accountable use case owners within 12 months of this policy taking effect.
Agencies must share the register with the DTA every 6 months, commencing when they create the register to meet the above requirement.
The Standard for accountability lists the minimum fields agencies must capture in the use case register. Agencies may add further fields to meet their organisational needs, and may reuse an existing register for the purposes of meeting this requirement. The standard also provides instructions for sharing agency registers with the DTA.
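As a purely illustrative sketch, a register entry might be structured as follows. The field names below are hypothetical examples only; the actual minimum fields agencies must capture are those listed in the Standard for accountability.

```python
# Hypothetical sketch of an internal AI use case register entry.
# Field names are illustrative only; the mandatory minimum fields
# are defined in the Standard for accountability.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    use_case_id: str        # agency-assigned identifier
    name: str               # short title of the AI use case
    description: str        # what the AI does and why
    accountable_owner: str  # designated accountable use case owner
    inherent_risk: str      # e.g. "low", "medium" or "high"
    residual_risk: str      # risk remaining after agreed treatments
    last_reviewed: date     # date of most recent re-validation

@dataclass
class AIUseCaseRegister:
    records: dict[str, AIUseCaseRecord] = field(default_factory=dict)

    def upsert(self, record: AIUseCaseRecord) -> None:
        """Add a use case entry, or update it if the identifier already exists."""
        self.records[record.use_case_id] = record
```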
Preparedness and operations
The principles and requirements included in this section standardise key elements of AI governance that allow agencies to build AI capability and use AI responsibly.
Principles
- Protect Australians from AI harms.
- Ensure APS officers can explain, justify and take ownership of advice and decisions made using AI.
- Build AI capability for the long term.
- Remain flexible and adaptable to accommodate technological advances.
Mandatory requirements
Operationalise the responsible use of AI
Agencies must establish an approach to embed responsible AI practices within 12 months of this policy taking effect. This may vary according to the scale and scope of agency AI use.
At a minimum, the approach will provide an agency with:
- a process for adopting AI use cases in line with the implemented actions of this policy, as well as the agency's enterprise risk management and governance approach.
- a way to inform staff who are designing and implementing AI use cases about Australia's AI Ethics Principles.
- a pathway for staff to report AI safety concerns, including AI incidents.
- pathways for the public to report AI safety concerns, appropriate to the agency's AI use.
- clear processes to address AI incidents, aligned to their ICT incident management approach. Incident remediation must be overseen by an appropriate governance body or senior executive and should be undertaken in line with any other legal obligations.
Agencies may modify existing policies, procedures and frameworks, or create new ones. Smaller agencies with minimal AI adoption could amend existing documentation and/or assign key personnel to guide staff on responsible AI adoption on an ad hoc basis. Agencies with greater AI adoption could create dedicated AI policies, procedures and/or frameworks to support responsible adoption. Accountable officials are responsible for deciding the appropriate approach for their agency.
Staff training on AI
Agencies must implement mandatory training for all staff on responsible AI use within 12 months of this policy taking effect. Agencies should consider the Guidance for staff training on AI and can use the AI fundamentals training module to meet the requirement. They can use the module as provided, modify it, or incorporate it into an existing training program based on their specific context and requirements. Alternatively, agencies can allow their staff to access the module directly through APSLearn.
Agencies should implement additional training for staff as required, in consideration of their roles and responsibilities: for example, additional training for those responsible for the procurement, development, training and deployment of AI systems.
Recommended actions
AI technical standard
It is strongly recommended that agencies apply the AI technical standard for Australian Government. The standard is designed for Australian Government agencies adopting AI. It embeds the principles of fairness, transparency and accountability into a set of technical requirements and guidelines.
AI procurement guidance
It is strongly recommended that agencies refer to the Guidance on AI procurement in government when procuring AI products and services. The guidance offers practical, step-by-step advice to help agencies identify and manage AI-specific risks while maintaining procurement best practices.
Agencies should consider
Applying the generative AI guidance
Applying the Managing access to public generative AI tools guidance and the Using public generative AI tools safely and responsibly guidance.
Capability development
Developing staff AI capability to effectively use AI and comply with AI policy and regulation.
AI use case impact assessment
The principles and requirements in this section are intended to assess the potential impacts of AI use cases and to ensure additional oversight of higher-risk AI.
Principles
- Ongoing monitoring and evaluation of AI uses.
- AI risk mitigation is proportionate and targeted.
- AI use is lawful, ethical, responsible, transparent and explainable to the public.
Mandatory requirements
All new AI use cases
Agencies must assess all new AI use cases against the in-scope criteria (Appendix C) to determine if they are in scope of the policy. The assessment must be documented and take place during the design phase while developing requirements.
Agencies must begin AI use case assessments within 12 months of this policy taking effect.
For existing use cases not yet assessed, agencies must determine whether they are in scope of this policy and apply all relevant policy actions by 30 April 2027.
Where practicable, agencies should implement the requirements ahead of the deadlines listed above.
In-scope AI use cases
For AI use cases that are in scope, agencies must conduct an AI use case impact assessment, commencing at the design stage. Before the solution is deployed, agencies must finalise the assessment and apply any agreed risk treatments.
Agencies may conduct an AI use case impact assessment by using either:
- the Australian Government AI impact assessment tool (the impact assessment tool)
- an internal process that integrates all provisions of the impact assessment tool.
Where an agency integrates the tool, they must ensure:
- the internal process is consistent with the impact assessment tool
- it delivers the same (or a higher) risk outcome for inherent and residual risk.
Agencies must be able to revise their internal process in response to any impact assessment tool updates.
Agencies must add each in-scope AI use case to their internal register of AI use cases and update it as required, including changes to risk ratings and accountable use case owners. When deploying an in-scope AI use case, agencies must:
- regularly monitor and evaluate their use case to ensure it is operating as intended and that risks are being effectively managed.
- re-validate the AI use case impact assessment by checking its accuracy and updating it when there is a material change in the use case scope, usage or operation.
Agencies should also monitor changes that are not initiated by the agency, for example vendor changes and changes in the regulatory environment. Agencies could also ask vendors to provide information on updates through contractual mechanisms.
Medium-risk AI use cases
If an agency determines their in-scope AI use case has an inherent medium-risk rating when completing an AI use case impact assessment, they should consider whether the use case would benefit from being governed through a designated board or a senior executive. If agencies apply additional governance, they should choose an approach appropriate to the size of the agency and the scope of its AI use.
High-risk AI use cases
If an agency determines their in-scope AI use case has an inherent high-risk rating when completing an AI use case impact assessment, they must:
- report the use case to the agency accountable official with the reasons for the inherent high-risk rating, proposed mitigations and residual risks
- govern the use case through a designated board or a senior executive, whichever is appropriate for the size and scope of the agency.
Once an agency has decided to deploy the use case, they must:
- report the use case to the DTA through the accountable official (see the Standard for accountability)
- establish a system to review the use case regularly, at least every 12 months. The review must report to the relevant governing board or senior executive on whether the use case is operating as intended and whether risks are being effectively managed. The review must also consider the AI use case impact assessment and any revisions required to it.
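To make the tiered oversight above concrete, the sketch below summarises the escalation steps this policy attaches to each inherent risk rating. It is illustrative only: the function name and tier labels are hypothetical and do not correspond to any DTA tooling.

```python
# Illustrative summary of the risk-based oversight flow described above.
# Function name and tier labels are hypothetical, not DTA-defined.

def oversight_steps(inherent_risk: str) -> list[str]:
    """Return the oversight steps attached to an inherent risk rating."""
    steps = ["Record the use case and its risk rating in the internal register."]
    if inherent_risk == "medium":
        steps.append("Consider governance through a designated board or senior executive.")
    elif inherent_risk == "high":
        steps += [
            "Report to the accountable official with the rating, proposed mitigations and residual risks.",
            "Govern through a designated board or senior executive.",
            "If deployed: report to the DTA through the accountable official.",
            "Review at least every 12 months, reporting to the governing board or senior executive.",
        ]
    return steps
```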
Out-of-scope AI use cases
For use cases assessed as out of scope of this policy, agencies may adopt the use case while ensuring they comply with relevant existing obligations, such as privacy and security.
If an agency adopts an out-of-scope AI use case, they must reassess whether the use case has become in scope of this policy when there is a material change in the scope, usage or operation of the solution.
If the use case is found to be in scope, agencies must follow all applicable actions in this policy.
Appendix B: Definitions
Artificial intelligence
While there are various definitions of what constitutes AI, for the purposes of this policy agencies are to apply the definition provided by the Organisation for Economic Co-operation and Development (OECD):
'An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.'