-
Future-proofing the assessment
For example, how can the impact assessment address increasingly ubiquitous AI functions integrated into existing software?
This includes existing software providers adding AI functions to their products, giving buyers limited scope to opt out or to shape AI governance.
Participants also raised questions about assuring general-purpose AI tools (e.g. Copilot) that could theoretically generate a number of as-yet-unknown discrete use cases. Is it feasible to ensure every use case is assessed if teams or individual officers are creating niche use cases with general-purpose AI tools?
Proposed response
While some of the updates outlined above will address aspects of this feedback (e.g. specifying that material changes, such as new AI functions, should trigger use case assessment), further consideration is required to address it fully.
Relevant experts will be consulted to develop options, which may be added to guidance. Some of these issues may also be addressed through other resources, e.g. procurement advice, which will be referenced in the impact assessment guidance.
-
Key feedback themes and proposed responses
Key themes identified from pilot participant interviews and survey responses are summarised below, together with proposed actions in response.
-
Foundational learning
Build capability, improve confidence, support experimentation
Lead agency: APSC
The government will build the foundational capability of public servants to use AI responsibly, ethically and effectively. Capability building will proceed alongside work on the role of leaders in shaping AI adoption.
A foundational AI literacy training offering will be mandated for all staff through the AI in Government Policy update. This will be supported by practical training such as the GovAI interactive learning, resources (website, newsletters), and live webinars with public servants experienced with use of AI. The aim is to provide all public servants with capability foundations together with flexible, just-in-time learning to keep pace with rapid AI technological change and be confident in using AI responsibly and effectively.
Supporting leaders to provide safe and responsible adoption environments for staff will also be a focus. Regular information on leading organisations in AI adoption, together with dedicated masterclasses, will be provided to support senior leaders in this task.
In addition, communities of practice and peer learning will be implemented over time to embed capability and drive sustainable adoption, including through the Chief AI Officers initiative.
-
Going forward: Continual learning and adapting
The plan provides a strong foundation for achieving broad AI literacy across the APS. Ongoing training will support staff in their continual and iterative learning journey as more is discovered about how best to use AI to get better outcomes, and what it means for how we work. Ongoing staff engagement and consultation will help agencies to adapt, manage change effectively, and consider the impacts on employees, particularly women and First Nations peoples.
-
Going forward: Earning and keeping trust with Australians
Generative AI offers new opportunities to improve how government serves Australians and to build trust through open and transparent engagement with communities. The government will guide AI use with a clear understanding of Australians’ diverse needs, incorporating ongoing insights from implementation, and carefully considering where and how AI is appropriate and what is fundamental to responsible use. As new uses and applications emerge, the government will ensure that the guardrails are appropriate and fit-for-purpose so that our uses are ethical, moral, legal and people-first.
-
GovAI: Centrally hosted AI services
Technical infrastructure providing central AI tools and model brokerage services, preventing vendor lock-in
Lead agency: Finance
The government will leverage GovAI as a centralised AI hosting service to provide agencies a secure, Australian-based platform for developing customised AI solutions at low cost. By incorporating predefined guardrails, GovAI ensures that security and privacy remain paramount throughout the development process.
GovAI will include a use case library and a vendor-agnostic platform with needs-based model selection, enabling agencies to access a diverse range of AI models for their own development – including an onshore instance of OpenAI’s GPT models – without negotiating individual arrangements with commercial vendors. The inclusion of additional onshore models would further strengthen Australia’s data sovereignty, reduce technical barriers, and deliver measurable cost and time efficiencies across government.
Within a technology-agnostic framework, GovAI allows teams to engage with tools from multiple industry providers, mitigating the risks associated with vendor lock-in and technological obsolescence. Promoting the use of GovAI also minimises duplication, fosters shared learning, and accelerates both capability uplift and delivery timelines.
As a foundational technical service, GovAI will provide the necessary infrastructure and technical skills to develop, test, and support secure access to generative AI alongside customised agency-specific solutions and other whole-of-government applications.
-
Initiative 2 goes here
Lead agency: Department of Government
-
Initiative 3 here
Lead agency: Department of Government
-
1. Mandate AI use case impact assessment – with some flexibility
1.1. Update the Policy for the responsible use of AI in government (the AI policy): Introduce mandatory AI use case governance actions. This will require agencies to conduct an AI impact assessment for use cases that meet certain criteria.
1.2. Provide agencies with flexibility to integrate the AI impact assessment into their own governance processes, depending on specific agency needs and capacity: For example, agencies could be required to conduct a threshold assessment, using sections 1-3, for all use cases. Alternatively, they may be required to complete a full assessment (sections 4-11) for use cases with elevated risks identified at the threshold assessment stage. For this, agencies could either:
- Complete the full assessment as a standalone process using the assessment tool documentation, as tested through the pilot.
- Integrate sections 4-11, in full or in part, into existing governance processes.
This flexible, hybrid approach may be appropriate for agencies with governance mechanisms that already address some or all of the requirements in sections 4-11. These agencies may prefer to adapt their existing processes to integrate the section 4-11 requirements, rather than completing a separate AI impact assessment that overlaps with or duplicates existing processes.
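As an illustration only, the two-stage flow above – a threshold assessment (sections 1-3) for every use case, escalating to a full assessment (sections 4-11) where elevated risk is identified – could be modelled as a simple triage routine. The section numbers come from the assessment tool; every function and field name below is hypothetical, not part of the tool itself:

```python
# Hypothetical sketch of the two-stage assessment flow; section numbers
# (1-3 threshold, 4-11 full) come from the assessment tool, while all
# function and field names here are illustrative placeholders.

def run_threshold_assessment(use_case):
    # Sections 1-3: a real threshold assessment would guide officials through
    # risk questions; here the outcome is simply read from the record.
    return {"sections": list(range(1, 4)), "risk_level": use_case.get("risk", "low")}

def run_full_assessment(use_case):
    # Sections 4-11: completed standalone or integrated into existing
    # governance processes, per the agency's chosen approach.
    return {"sections": list(range(4, 12)), "complete": True}

def assess_use_case(use_case):
    threshold = run_threshold_assessment(use_case)  # required for all use cases
    if threshold["risk_level"] != "elevated":
        return {"outcome": "threshold only", "threshold": threshold}
    full = run_full_assessment(use_case)  # elevated-risk use cases only
    return {"outcome": "full assessment", "threshold": threshold, "full": full}
```

The escalation test (`risk_level != "elevated"`) is where an agency would plug in its own threshold criteria or existing governance triggers.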
2. Update and strengthen risk assessment
2.1. Update the threshold risk assessment (section 3): Include more objective questions that guide officials to correctly identify and assess relevant risks. Consult government risk management experts for feedback on proposed updates.
2.2. Require assessment officers to record the pre-mitigation (inherent) risk level as well as the post-mitigation (treated) risk level.
2.3. Update supporting guidance on risk assessment: Consider including examples and references to any relevant external resources.
3. Clarify scope of legal review section
3.1. Consider options to update the legal review step (section 11.1) to specify the legal aspects of an AI use case that need to be reviewed: This update will address pilot feedback, including suggestions to reframe the legal review step as a series of targeted questions. This could help clarify the scope of the required legal review – focusing on ensuring lawfulness and compliance with relevant legal frameworks – instead of the current open-ended question.
3.2. Consider how AI governance processes in other Australian and overseas jurisdictions incorporate legal review.
4. Other assessment tool improvements
4.1. Align assessment tool provisions with proposed AI policy updates and ensure AI policy updates consider relevant pilot feedback, including calls for:
- further guidance on the definition of AI and the ‘covered use case’ criteria – to be addressed in the AI policy itself
- further guidance on the timing for an initial assurance assessment and subsequent reassessment.
4.2. Explore options to develop a digital assessment tool, while retaining an ‘offline’ document version for agencies that indicated a preference for this option.
4.3. Address additional pilot feedback in updated assessment tool and guidance documents, while ensuring continued alignment with AI in government and broader AI policy developments. For further detail, see Key findings, above, and Key feedback themes and proposed responses in the Context, data and rationale section of this report.
-
Recommendations
-
GovAI Chat
Universally accessible and secure chat tool
Lead agency: Finance
The government will provide access to secure generative AI, through GovAI Chat, for everyone in the Australian Public Service. GovAI Chat will provide the capabilities of modern generative AI tools, while allowing users to leverage government data. Responses will be fast, current, and auditable through familiar and secure interfaces. Where smaller agencies may lack the scale and resourcing to adopt native AI tools, this will support staff across all agencies to safely and effectively experiment with AI tools and integrate them within their workflows.
The underlying framework will be flexible to respond to user needs and feature requests as technology evolves. It will be rolled out iteratively to allow for extensive and ongoing testing with users and teams across roles and agencies, ensuring it meets practical needs.
This will increase the capability of the APS to deliver more for the Australian public, while maintaining standards and complying with legal frameworks for cyber security and data sovereignty. The result will be a safer and smarter way to work that supports operational integrity and delivers value through time saved and reduced rework.
-
Guidance on public and enterprise AI services
Clarity and consistency on using public LLMs up to OFFICIAL
Lead agency: DTA/Home Affairs
The government will develop clear guidance on the use of public generative AI tools to give public servants confidence to use platforms like ChatGPT, Claude, and Gemini up to OFFICIAL level information. The Department of Home Affairs will issue policy guidance to outline requirements for using these tools via web browsers. DTA will also update its existing guidance on the use of public generative AI to reflect this and include practical examples to support responsible use.
The work will help shift the risk appetite across agencies and enable staff to access general-purpose AI tools at no cost. Each agency will need to implement the changes independently and should consider using safeguards, such as upload blockers, to manage risks around classified, personal or other sensitive information.
-
Support for AI tool procurement
Speeding processes, sharing information, and assisting agencies
Lead agency: DTA
The government will continue to enhance AI procurement pathways to make it easier for agencies to access trusted AI products and services in line with government standards. This will help address concerns from agencies that are cautious about adopting tools when changes like added generative AI features are introduced without prior notice or clear guidance. DTA is introducing AI-specific subcategories within procurement panels like BuyICT and Digital Marketplace, helping agencies identify vendors with proven capabilities and adopt AI tools consistently and quickly. This also helps ensure alignment with the government’s technical and ethical standards, reducing duplication and accelerating the safe adoption of AI solutions across the APS.
The DTA is also developing guidance to support AI procurement across government. The guidance will include a practical procurement checklist and aims to help agencies find and manage AI-related risks while supporting procurement best practices. The Department of Finance and the DTA are working together to ensure that current procurement arrangements remain fit-for-purpose for a rapidly evolving supplier ecosystem and new business offerings.
Women are significantly underrepresented across STEM and mixed STEM occupations. As government plays a key market-shaping role, future consideration will be given to better integrating gender equity objectives in AI or STEM-related procurement to ensure public investment aligns with both productivity and social equity goals, helping deliver on the objectives of Working for Women: A Strategy for Gender Equality.
Additionally, procurement processes and risk and security assessments are often duplicated across agencies due to low visibility of comparable work. Where agencies face these duplicated processes, AI adoption is slowed and disincentivised.
-
To better serve the public, every public servant will have:
- foundational training and capability supports to use generative AI tools safely, responsibly and effectively
- access to generative AI tools
- clear guidance on how to use these tools responsibly
- leadership and support from Chief AI Officers to promote adoption
- opportunities to collaborate, build on and reuse work of others.
-
What we plan to achieve