-
The AI Plan for the APS positions the government to improve service delivery, policy outcomes, efficiency and productivity by substantially increasing the safe and responsible use of AI in government.
-
Strategic alignment
-
Data and Digital Government Strategy
This plan supports progress toward the government’s 2030 vision to deliver simple, secure and connected public services, for all people and businesses, through world-class data and digital capabilities.
APS Reform Agenda
This plan aligns with and builds on the APS Reform agenda by embedding digital skills in the workforce and driving sustainable AI adoption to build a stronger APS that delivers better outcomes for the community.
-
These targets are based on an 18-month timeframe. Broadly, these align with two overarching milestones:
- July 2026 (0-12 months)
- July 2026 onwards (12+ months).
Individual initiatives also contain their own targets and measures (see Appendix A).
As an iterative plan, targets and delivery schedules will be updated to account for new developments and emerging opportunities.
-
- Everyone in the APS completing training on the fundamentals of AI use in the APS and having access to guidance to use AI safely and securely
- All public servants having access to generative AI tools
- Each agency and department appointing a senior executive as Chief AI Officer
- All agencies tracking and reporting their AI use
-
APS AI Plan implementation timeline
-
Image description
APS AI Plan timeline from July 2025 to December 2026 showing milestones for the 15 initiatives across the Trust, People, and Tools pillars.
Under Trust: AI in government policy and guidance updates (July to December 2025), AI Review Committee (December 2025 to December 2026), Clear expectations of external service providers (December 2025 to December 2026), and AI strategic communications (July 2025 to December 2026).
Under People: Foundational learning (July 2025 to December 2026), Staff consultation and engagement (July 2025 to December 2026), AI delivery and enablement (December 2025 to December 2026), and Chief AI Officers (December 2025 to December 2026).
Under Tools: GovAI open trial (July to October 2025), GovAI: Centrally hosted AI services (November 2025 to December 2026), GovAI Chat (December 2025 to December 2026), Guidance on public and enterprise AI services (July to December 2025), Support for AI tool procurement (December 2025 to July 2026), Re-using intellectual property (December 2025 to December 2026), Central register of generative AI assessments (December 2025 to July 2026), and New whole-of-government cloud policy (November 2025 to March 2026).
-
Australia's AI ambition
To strengthen Australia’s economy, society and security, the Government’s vision for AI in Australia focuses on:
- capturing the opportunities of AI
- ensuring the benefits are shared widely, and
- keeping Australians safe.
-
Many AI initiatives are already underway, with progress being made against these objectives. However, to date, the adoption of generative AI across government has been inconsistent. There are varying levels of AI maturity between – and even within – agencies.
AI maturity in the APS is a journey: starting with no formal adoption, progressing through leadership engagement and foundational capability, advancing to data-driven improvements in services, and culminating in AI becoming standard practice throughout government.
Agencies are also facing a complex mix of uncertainties and risks in dealing with generative AI. There are questions around the appropriate use of AI, and the pace and scale of recent AI developments have heightened the need to manage privacy, cyber security and sovereignty risks. The rapidly changing digital environment will also likely introduce new risks and considerations. The plan seeks to get the balance right – capturing the opportunities and maximising the benefits, while minimising harms and mitigating risks.
Governance, training and communications initiatives already delivered
- Technical standard for government’s use of AI
- Guidance on the use of OFFICIAL information in public generative AI tools
- Sponsored GovHack 2025
- AI in government fundamentals eLearning (available for all staff)
- AI Government Showcase of AI use-cases with industry & academia
- GovAI applied eLearning
- APS Academy MasterCraft webinars for new users
- APS Academy Lunch and Learn webinars on AI use cases in the APS and how to implement AI in APS organisations, delivered in partnership with GovAI
-
Going forward: Safe and secure data for Australians
As the government adopts and uses more AI, maintaining the security and safety of Australians’ data will be critical. Consistent with existing information security and data protection frameworks and practices, the government will take a proactive approach throughout AI implementation to give Australians confidence their data is protected.
-
Domestic violence survivor-victims
Provide visibility of who has access
Consider providing users with a clear and easily accessible list of who can access the service and who will be notified of any changes (e.g. a change of address). Give users the choice of when and how they receive government communications, and make it easy to change in the event they need to do it quickly.
Make it easy to remove multiple users
Support survivor-victims to remove multiple users from accessing a shared government service or account. Consider how a user can do this in a privacy-enhancing way, so as not to unnecessarily trigger or notify other users.
Support ‘quick exit’
Consider the use of ‘quick exit’ buttons within your digital service to help redirect users to other digital pages if they are in an unsafe environment (a minimal sketch follows this guidance).
Clearly communicate tasks and actions
Only request information that is legislatively required and avoid unnecessary requests. Use simple steps and actions to clearly communicate what is required and limit the impacts on survivor-victims. Consider the use of checklists and easy-to-follow formats to avoid decision fatigue and to support survivor-victims to complete the service.
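The ‘quick exit’ pattern above can be implemented with a small amount of client-side script. The TypeScript sketch below is illustrative only: the neutral destination URL, the button id and the Escape-key shortcut are assumptions to adapt to your service, not a prescribed implementation.

```typescript
// Minimal 'quick exit' sketch (assumptions: a button with id="quick-exit"
// and a placeholder neutral destination; adapt both to your service).
const NEUTRAL_SITE = 'https://www.example.gov.au/'; // placeholder destination

function quickExit(): void {
  // Open a neutral page in a new tab so the service is no longer in focus.
  window.open(NEUTRAL_SITE, '_blank');
  // Replace the current history entry so the browser back button does not
  // return the user to the page they were viewing.
  window.location.replace(NEUTRAL_SITE);
}

// Trigger from a dedicated, always-visible button.
document.getElementById('quick-exit')?.addEventListener('click', quickExit);

// Also offer a keyboard shortcut (Escape) for users who cannot reach the button quickly.
document.addEventListener('keydown', (event: KeyboardEvent) => {
  if (event.key === 'Escape') {
    quickExit();
  }
});
```

A quick exit does not remove earlier visits from browser history, so services often pair the button with advice on clearing browsing history.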
-
Staff consultation and engagement
Embedding genuine staff and union engagement in AI-related APS changes
Lead agency: APSC
Meaningful consultation with staff and unions will be critical to building trust in AI adoption across the APS. It will ensure employees have a voice in how AI is introduced, how its benefits are realised, what problems it can solve, and where it is likely to have a significant effect or material impact on them, including impacts related to gender, cultural identity and First Nations peoples.
To support this, the APSC will issue a Circular setting out clear standards for consultation on AI-related workplace changes. These standards will align with existing obligations in APS Enterprise Agreements and specifically address the use of AI in the APS. The Circular will complement existing engagement frameworks across the APS, such as Agency Consultative Committees, which enable inclusive and representative input from employees and unions. These mechanisms support meaningful input from employees and unions, particularly ahead of major workplace changes. Genuine and effective consultation generally involves providing employees and their relevant union with a genuine opportunity to influence the decision prior to it being made.
-
AI delivery and enablement (AIDE)
Central team accelerating AI adoption
Lead agency: Finance
The Australian Government will establish a central function, AI delivery and enablement (AIDE), to lead the acceleration of safe and effective AI uptake, ensuring timely adoption and more efficient government services. This multidisciplinary team will complement, but not replace, existing whole-of-government structures and processes, with a dedicated focus on adoption: helping tackle common adoption barriers, navigating and reducing the complex compliance uncertainties raised by AI, and identifying and sharing lessons. Through their work, the team could help bring core work back into the public service.
Recognising that early adopters face challenges and delays, this team will help to expose and question existing assumptions and processes that might be inadvertently slowing adoption. It will explore the implications for public services raised by first-movers to inform future implementations. It will capture use-cases that can inform broader opportunities for whole-of-government adoption of AI and promote re-use of solutions. The team will leverage the enthusiasm and skills existing across the APS to help it understand emerging issues early on and to help pull together guidance on what works – and what does not. Noting the fast-moving nature of the technology, the team will take a flexible and iterative approach informed by learning from across the APS. For this reason, the team will also have responsibility for ensuring the effective implementation of this plan.
-
Chief AI Officers
Accelerating adoption, driving cultural change, connecting agencies
Lead agency: Finance
Agencies will appoint Chief AI Officers in recognition of the fundamental shift that generative AI is bringing to government operations. These senior leaders will accelerate consistent and collaborative AI capability development across the APS, identifying where AI can meaningfully improve Australians’ lives through faster service delivery, better-targeted policy interventions, and more efficient allocation of resources.
Chief AI Officers will drive adoption and advocate for strategic change within their agencies. Their responsibilities will include leading internal engagement, sharing guidance and use cases, providing contestable advice, and overseeing AI adoption, experimentation, and innovation. Chief AI Officers will have the mission and authority to effect change, and will focus on enabling strategic uptake and innovation.
Chief AI Officers are responsible for leading the required change in their agencies, while AI Accountable Officials are responsible for the governance required to comply with the AI in government policy. Agencies will have flexibility to determine who in their structure best meets the needs of the Chief AI Officer role. Some agencies, such as smaller organisations, may opt to have both the Chief AI Officer and Accountable Official roles fulfilled by the same leader.
Chief AI Officers will be supported by early adopters and experts within their agencies who have hands-on experience applying AI in their work, whether for personal efficiency, specific job-related functions, or workflow integration. A peer working group will develop shared training materials for distribution via platforms such as GovAI, APS Professions and the APS Academy. This network will build on the existing AI Community of Practice, serving as a forum to provide feedback on common, reusable AI use cases, ways of working and strategies.
Chief AI Officers will convert AI’s potential into demonstrable improvements in government performance, driving the capabilities and collaborative approaches needed to deliver for Australians.
-
Re-using intellectual property
Re-using solutions, ensuring visibility, removing duplication
Lead agency: Finance
GovAI will provide a platform for making intellectual property (IP) discoverable and reusable across the APS, reducing duplication, reducing costs and accelerating knowledge sharing. Agencies frequently procure valuable IP from consultancies or develop their own, including strategies, code for scenario analysis, and applications (many of which will be AI-enabled). But these materials are often invisible across government. As a result, agencies approach the market at significant expense for work that already exists. The platform will be consistent with the intellectual property principles for Commonwealth entities.
Making IP easily discoverable will allow agencies and centres of excellence (such as Australian Government Consulting) to efficiently synthesise insights from existing contract materials, approaches and reports, and consolidate Commonwealth-owned outputs. With tailored support to align materials with each agency’s needs, the APS could strengthen internal capability and reduce dependence on external contractors.
The existing GovAI Use Case Library, which already includes 20 detailed AI use cases from across the APS, could be augmented to support this expanded capability. This could also learn from similar models, such as the United Kingdom’s i.AI Incubator with its open-source suite of tools on GitHub. In addition to enabling reuse of IP, a common disclaimer for collecting data will ensure transparency around the government’s use of that data with AI tools.
Consistent with the Data and Digital Government Strategy, agencies will make non-sensitive government data open by default. Opening and sharing these data assets enables AI development that supports evidence-based decision-making, drives productivity across sectors, and delivers better outcomes for people and business.
-
Central register of generative AI assessments
Sharing and aligning assessments to avoid duplication
Lead agency: Finance
The government will create a centralised register, hosted on GovAI, for completed assessments of AI systems and services. For instance, this could include Foreign Ownership, Control or Influence (FOCI) risk assessments, Information Security Registered Assessors Program (IRAP) assessments, cyber-security assessments of systems, and relevant impact assessments. This will help streamline procurement and deployment processes across the APS. Sharing completed assessments will allow agencies to reference and reuse existing evaluations, speeding adoption. Agencies will be able to access prior assessments of platforms and tools conducted by other departments, not only saving time but also supporting alignment with security requirements across agencies.
The framework and documentation will be established in coordination with lead policy bodies such as the DTA and Department of Finance, and in close consultation with relevant bodies such as the Australian Signals Directorate (ASD) and the Department of Home Affairs. While agencies are ultimately responsible for ensuring compliance and risk management, shared assessments provide a foundation for achieving efficiencies and reducing duplicated effort where possible.
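As an illustration of what a shared register entry might contain, the TypeScript sketch below is a hypothetical schema; the field names and example values are assumptions for discussion, not a published GovAI data model.

```typescript
// Hypothetical shape of a shared assessment record; field names are
// illustrative and not drawn from any published GovAI schema.
type AssessmentType = 'FOCI' | 'IRAP' | 'Cyber security' | 'Impact assessment';

interface AssessmentRecord {
  system: string;            // AI system or service that was assessed
  assessmentType: AssessmentType;
  assessingAgency: string;   // agency that completed the assessment
  completedOn: string;       // ISO 8601 date the assessment was finalised
  outcomeSummary: string;    // headline finding other agencies can reference
  documentUrl?: string;      // link to the full assessment, where access permits
}

// Example entry another agency could check before commissioning a fresh assessment.
const exampleEntry: AssessmentRecord = {
  system: 'Example vendor generative AI platform',
  assessmentType: 'IRAP',
  assessingAgency: 'Example Department',
  completedOn: '2026-03-01',
  outcomeSummary: 'Assessed against the requirements listed in the report, with conditions.',
};
```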
-
New whole-of-government cloud policy
Driving cloud adoption, unlocking greater AI potential
Lead agency: DTA
Cloud adoption is critical for agencies to unlock the full potential of AI through enhancing service delivery, enabling real-time data access, and supporting advanced analytics. The government will develop a new whole-of-government cloud policy to support responsible AI use in government by ensuring agencies can securely and efficiently leverage cloud infrastructure.
The policy will set clear requirements for cloud use, guiding agencies to accelerate cloud uptake while maintaining compliance with protective security standards and uplifting workforce capability. It will also address legacy migration challenges, ensuring that transitions are managed safely and efficiently.
-
AI Review Committee
Enhancing oversight and ensuring consistent, ethical deployment of AI
Lead agency: DTA
The government will establish an AI Review Committee to enhance whole-of-government oversight and ensure consistent, responsible deployment of AI across the APS. The committee will comprise experts from right across the APS, ensuring best practice approaches inform decision-making, drawing on the guidance and insights of the Australian Information Commissioner, Privacy Commissioner, Commonwealth Ombudsman and others who oversee government administration.
This committee will provide advice and non-binding recommendations to agencies on high-risk AI use cases. It will ensure decisions around sensitive or complex AI deployments are grounded in cross-disciplinary scrutiny, consider diverse voices, and uphold government AI safety.
Beyond case-by-case reviews, the committee may conduct deep dives into emerging AI risks and ethics issues. For example, if a future central AI use case register identifies a surge in deployments within a particular domain – such as predictive analytics in compliance or employment decisions – the committee could be tasked with providing targeted advice.
This function would enable early identification of systemic risks and support proactive guidance to agencies, including on remedies when things do not go to plan. The committee will also support responses and recommendations following serious AI incidents, and ensure lessons and available remedies are reflected in future proposals, supporting continuous improvement in government AI practices.
-
Clear expectations of external service providers
Service providers are responsible for their work when using AI
Lead agency: DTA
The Digital Transformation Agency’s Digital Sourcing ClauseBank includes optional clauses requiring that a service provider’s use of AI be approved by the buyer. The government will expand this approach by requiring all suppliers under the whole-of-government Management Advisory Services and People Panels to advise of any planned use of AI in the delivery of services when responding to requests for quotes.
The government will also add clauses to the broader Commonwealth Contracting Suite and ClauseBank clearly stating that consultants and external contractors remain fully responsible for the services they deliver, regardless of whether generative AI is used in their development or delivery, and ensuring transparency and accountability in the use of generative AI technologies by external providers.
These will better equip agencies to assess risks and manage compliance throughout the procurement lifecycle, and meet their probity obligations under the Commonwealth Procurement Rules and the Policy for the responsible use of AI in government.