Trust: Transparency, ethics and governance
AI adoption in the APS requires a strong authorising environment and the maintenance of public trust. AI tools should ultimately serve the public and should be used safely and responsibly by the public service, in line with the Digital Transformation Agency's (DTA) Policy for the responsible use of AI in government and the Department of Industry, Science and Resources' AI Ethics Principles.
The initiatives under this pillar build on existing security and privacy safeguards governing the use of ICT and data, including the:
- Protective Security Policy Framework (PSPF)
- Information Security Manual (ISM)
- Privacy Act 1988
- guidance from the Office of the Australian Information Commissioner (OAIC)
- Data Availability and Transparency Act 2022
- guidance from the Office of the National Data Commissioner (ONDC).
They will also strengthen governance, improve transparency, and ensure clear and consistent communication across the APS and with the community.
AI in government policy and guidance updates
Providing clarity, enhancing accountability and strengthening risk management
Lead agency: DTA
The government will update the Policy for the responsible use of AI in government (the AI in government policy) to strengthen public trust in government by providing clarity, enhancing accountability, and ensuring effective risk identification and management in the use of AI. The AI in government policy took effect on 1 September 2024. It laid the foundation by introducing accountability measures, transparency requirements, and guidance for foundational training. As AI adoption across the APS accelerates and the technology landscape evolves, the DTA is updating this policy to include a broader set of AI governance practices that will support agencies to confidently adopt AI while building public trust.
The updated policy will require agencies to develop a strategic position on AI adoption and to communicate this position to staff, supporting agencies to better engage with AI and realise its benefits. Accountability requirements will be strengthened so that each in-scope use case has a clearly assigned accountable officer and is recorded in an internal register. This builds on the important role of the AI Accountable Official, a role mandated under the existing policy to ensure its requirements are met.
The update will also build trust in government use of AI by mandating the AI impact assessment tool for in-scope use cases, targeting governance and risk management actions at higher-risk use cases.
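Neither the policy nor this plan prescribes a format for the internal register. Purely as an illustration (the field names, risk tiers and routing rule below are invented for the sketch, not drawn from the policy or the impact assessment tool), a minimal register entry pairing each in-scope use case with its accountable officer and assessed risk tier might look like:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # Illustrative tiers only; the actual AI impact assessment tool defines its own scale.
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class UseCaseRecord:
    """Hypothetical entry in an agency's internal AI use case register."""
    use_case_id: str
    description: str
    accountable_officer: str  # the clearly assigned accountable officer
    risk_tier: RiskTier       # outcome of the (assumed) impact assessment
    impact_assessment_complete: bool


def needs_enhanced_governance(record: UseCaseRecord) -> bool:
    """Route higher-risk use cases to additional governance and risk management actions."""
    return record.risk_tier is RiskTier.HIGH


# Example: a high-risk use case that would attract extra oversight.
record = UseCaseRecord(
    use_case_id="UC-0042",
    description="Automated triage of service requests",
    accountable_officer="(name of accountable officer)",
    risk_tier=RiskTier.HIGH,
    impact_assessment_complete=True,
)
assert needs_enhanced_governance(record)
```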
AI Review Committee
Enhancing oversight and ensuring consistent, ethical deployment of AI
Lead agency: DTA
The government will establish an AI Review Committee to enhance whole-of-government oversight and ensure consistent, responsible deployment of AI across the APS. The committee will comprise experts from across the APS, ensuring best practice approaches inform decision-making and drawing on the guidance and insights of the Australian Information Commissioner, the Privacy Commissioner, the Commonwealth Ombudsman and others who oversee government administration.
This committee will provide advice and non-binding recommendations to agencies on high-risk AI use cases. It will ensure decisions around sensitive or complex AI deployments are grounded in cross-disciplinary scrutiny, consider diverse voices, and uphold the government's commitment to safe and responsible AI.
Beyond case-by-case reviews, the committee may conduct deep dives into emerging AI risks and ethics issues. For example, if a future central AI use case register identifies a surge in deployments within a particular domain – such as predictive analytics in compliance or employment decisions – the committee could be tasked with providing targeted advice.
This function would enable early identification of systemic risks and support proactive guidance to agencies, including on remedies when things do not go to plan. The committee will also support responses and recommendations following serious AI incidents, and ensure lessons and available remedies are reflected in future proposals, supporting continuous improvement in government AI practices.
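The shape of any future central register and its reporting is similarly unspecified. As a purely illustrative sketch (the domain tags, threshold and function name are invented), the kind of surge check described above could be as simple as counting new registrations per domain in a reporting period:

```python
from collections import Counter
from collections.abc import Iterable


def flag_surging_domains(domain_tags: Iterable[str], threshold: int = 10) -> list[str]:
    """Return domains whose new registrations exceed an (entirely illustrative)
    threshold, as candidates for a committee deep dive."""
    counts = Counter(domain_tags)
    return [domain for domain, n in counts.items() if n > threshold]


# Example: a surge of predictive analytics deployments in compliance.
new_registrations = ["compliance-predictive-analytics"] * 12 + ["chat-assistant"] * 3
print(flag_surging_domains(new_registrations))  # ['compliance-predictive-analytics']
```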
Clear expectations of external service providers
Service providers are responsible for their work when using AI
Lead agency: DTA
The Digital Transformation Agency's Digital Sourcing ClauseBank includes optional clauses requiring that service provider use of AI be approved by the buyer. The government will expand this approach by requiring all suppliers under the whole-of-government Management Advisory Services and People Panels to advise of any planned use of AI in the delivery of services when responding to requests for quotes.
The government will also add clauses to the broader Commonwealth Contracting Suite and ClauseBank which clearly state that consultants and external contractors remain fully responsible for the services they deliver, regardless of whether generative AI is used in their development or delivery, and which ensure transparency and accountability in the use of generative AI technologies by external providers.
These clauses will better equip agencies to assess risks and manage compliance throughout the procurement lifecycle, and to meet their probity obligations under the Commonwealth Procurement Rules and the Policy for the responsible use of AI in government.
AI strategic communication initiatives
Ensuring consistent, clear messaging on the safe and responsible adoption of AI
Lead agency: Finance
Clear and consistent communication about AI is essential to building trust and confidence across the APS. Staff need to understand what AI can be used for, what is allowed, and how risks are being managed, as well as where to go for help and what to do when things do not go to plan. This helps them feel confident, empowered, and supported in using AI safely and responsibly.
A centrally coordinated approach to these communications will ensure that all agencies are aligned with whole-of-government policies and expectations. It also supports transparency: openness and consistency in how we talk about AI help build trust across the APS and reinforce confidence in how decisions are made.
Strategic communication will play a key role in reinforcing existing consultation and engagement frameworks across the APS, such as provisions in Enterprise Agreements, agency consultative committees and the APS Consultative Committee. It will complement these processes by delivering consistent messaging, practical tools, and resources to help employees understand and adapt to the integration of AI in their work. It will also support the establishment of genuine consultation, ensuring transparency, building trust, and fostering workforce support for change.
Going forward: Earning and keeping the trust of Australians
Generative AI offers new opportunities to improve how government serves Australians and to build trust through open and transparent engagement with communities. The government will guide AI use with a clear understanding of Australians' diverse needs, incorporating ongoing insights from implementation and carefully considering where and how AI is appropriate and what is fundamental to its responsible use. As new uses and applications emerge, the government will ensure the guardrails remain appropriate and fit for purpose so that its use of AI is ethical, legal and people-first.