Section 3.1 – Risk assessment

Feedback indicated that the risk assessment process is too subjective and overly dependent on the assessing officer's value judgments and background knowledge.

Responses to this section differed significantly depending on the assessing officer’s familiarity with the AI risk landscape, with less experienced officers tending to underrate or overlook risks they weren’t aware of. Some users reported challenges using section 3 as a high-level record of post-treatment risks and planned mitigations as it does not specify how to assess inherent risk and select mitigations.

Proposed response

Particular attention will be focused on updating this section, drawing on pilot feedback, emerging best practices and expert advice. Updates could include:  

  • adding more objective questions, considering particular risks that may be higher priority or less well understood
  • considering developing a checklist of use case features that would automatically elevate risk above ‘low’
  • adding examples/features illustrating low/medium/high risk use cases to the guidance
  • recommending assessing officers consult experts and peers to review risks as a team
  • adding fields to document inherent, pre-mitigation risks
  • further emphasising that the assessment process does not replace agency risk management processes.
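One of the suggestions above – a checklist of use case features that would automatically elevate risk above 'low' – can be sketched in a few lines. This is a minimal illustration only, assuming hypothetical feature names and ratings; it does not reflect the assessment tool's actual criteria.

```python
# Hypothetical sketch: a checklist of use case features that automatically
# elevate a threshold risk rating above 'low'. Feature names and ratings are
# illustrative assumptions, not content drawn from the assessment tool.

ELEVATING_FEATURES = {
    "affects_individual_entitlements",   # e.g. decisions about benefits or services
    "processes_sensitive_personal_data",
    "operates_without_human_review",
    "public_facing_output",
}

def threshold_rating(use_case_features: set, base_rating: str = "low") -> str:
    """Return the threshold risk rating after applying the elevation checklist."""
    triggered = use_case_features & ELEVATING_FEATURES
    if triggered and base_rating == "low":
        return "medium"  # any triggered feature lifts the rating above 'low'
    return base_rating

# A public-facing use case with no human review rates at least 'medium'.
print(threshold_rating({"public_facing_output", "operates_without_human_review"}))
```

A simple rule of this kind would make the threshold assessment less dependent on individual officers' judgment, at the cost of some flexibility.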
Section 3.3 – Executive sponsor endorsement

Participants reported some confusion on the division of responsibilities for completing the assessment and providing endorsement. For example, should the executive sponsor come from the business area or ICT area; and (how) should the agency’s AI accountable official be involved?

Pilot participants that submitted assessments to their CIO/equivalent found sign-off relatively straightforward, while those that submitted to the policy/business area SES found it more challenging, requiring additional briefing and time.  

Proposed response

Consider more specific guidance for the executive officers signing off on the assessment. Consider providing flexibility for combined sign-off – for example, the CIO/AO (or their delegate) could sign off on technical aspects, while the business area confirms alignment with policy objectives.

Section 11.1 – Legal review

Pilot participants advised their legal teams struggled with this section and would be reluctant to provide sign-off, which could be a blocker. Some legal teams advised they would need to procure external legal advice.  

No participants secured formal legal review as part of the pilot, due to legal teams' resource constraints and turnaround times (legal teams usually require at least 2-3 weeks, and longer for complex requests, especially those requiring external advice).

Proposed response

Updates will clarify the purpose of the legal review. Consider including more specific guidance on the questions the assessing officer should ask of their legal team.

List examples of the types of legal risks to consider – for example, 'If you don't undertake legal review, you might miss/overlook XYZ.' Some pilot participants provided useful detailed feedback on this aspect, including feedback from their legal teams.  

Sections 11.3 and 11.4 – Internal and external review

Regarding the requirement at section 11.3 for internal governance review, participants sought advice on identifying or creating a suitable internal governance review body and suggested including specific guidance defining the scope and objectives for this process.  

Some agencies questioned the inclusion of an external review body at section 11.4 for high-risk use cases, querying how this would work in practice – while recognising this is only a suggestion and not mandatory.  

Participants did not indicate a strong desire for the government to establish a new external review body. No high-risk use cases were tested through the pilot, and the volume of future demand for review of high-risk cases remains unclear.  

Proposed response

Some aspects of this will be covered in the AI policy update, including advice to reuse an existing governance body where possible. Will consider other jurisdictions' settings and experiences with external review processes. If the government decides to establish a central external review body, it would need to consider:

  • membership – including private sector/community/academic experts, how to manage probity and information security concerns
  • governance and processes – who would chair; how would the body make decisions (consensus/majority voting); how frequently would it meet?
  • direct costs – including member fees, travel, other meeting costs
  • other resource implications – agency staffing to support the body.  
Skills and capability considerations

Some participants sought more guidance specifying the types of roles, expertise and skills to consult on specific sections – both internal (e.g. risk/assurance teams, CIO, data governance) and external (e.g. human rights and cyber security experts).

In particular, participants felt there must be diversity in team skills, background and experience, and that ideally business areas should lead the assessment, with significant support and input from others. Participants noted that AI adoption presents sociotechnical issues distinct from traditional ICT projects.

The pilot assessment tool states that contributing officers should be 'sufficiently trained'; however, some participants called for more guidance on defining this.

Proposed response

Updates to address this feedback could include:  

  • emphasising the recommendation to consult widely during assessment and involve diverse expertise
  • referring to APS job families, SFIA or other relevant frameworks.

However, overly prescribing the roles and skills required to complete the assessment could make AI adoption more challenging, especially for smaller agencies with limited resources and fewer staff with specific expertise. Will also consider suggestions to commission experts to develop other supporting resources, for example on human rights considerations – dependent on resourcing.

Governance considerations

Some agencies perceived the impact assessment to be recommending the establishment of entirely new AI-specific governance processes, and that alignment with existing processes was challenging. Other agencies found integrating with existing governance processes relatively straightforward.

Proposed response

Consider bolstering guidance for assessing officers on aligning AI impact assessment with existing internal (and broader APS) governance settings, policies, procedures, oversight mechanisms – while emphasising the assessment does not replace existing obligations.  


Checklist or one-page overview

Several participants suggested something like a one-page overview could be useful, especially for executive sponsors providing sign-off.

Proposed response

Need to consider what this would look like and where it would fit within (or alongside) the current impact assessment. Overly simplifying may lead to a superficial, 'tick box' approach. A digital tool may streamline this – e.g. auto-generating a summary.  
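As a sketch of the auto-generated summary idea, the fragment below shows how a digital tool might assemble a one-page overview from completed assessment fields. The field names and layout are hypothetical assumptions, not the assessment tool's actual schema.

```python
# Illustrative sketch only: auto-generating a one-page summary for executive
# sign-off from a completed assessment. Field names are assumed, not drawn
# from the pilot assessment tool.

def one_page_summary(assessment: dict) -> str:
    """Render the key assessment fields as a short, sign-off-ready summary."""
    lines = [
        f"Use case: {assessment['name']}",
        f"Threshold risk rating: {assessment['risk_rating']}",
        f"Key mitigations: {', '.join(assessment['mitigations'])}",
        f"Legal review completed: {'yes' if assessment['legal_review'] else 'no'}",
    ]
    return "\n".join(lines)

summary = one_page_summary({
    "name": "Call-centre triage assistant",
    "risk_rating": "medium",
    "mitigations": ["human review of outputs", "staff training"],
    "legal_review": True,
})
print(summary)
```

Because the summary is derived from the underlying assessment rather than written separately, it avoids the 'tick box' risk of a standalone simplified form.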

Future-proofing the assessment

Participants asked how the impact assessment can remain effective as AI evolves – for example, how can it address ubiquitous AI functions increasingly integrated into existing software?

This could include current software providers adding AI functions to existing software products, giving buyers limited scope to opt out or shape AI governance.  

Participants also raised questions around assuring general purpose AI tools (e.g. Copilot) that could theoretically generate a number of as-yet-unknown discrete use cases. Is it feasible to ensure every use case is assessed, if teams or individual officers are creating niche use cases using general AI tools?  

Proposed response

While some of the updates outlined above will address aspects of this feedback (e.g. specifying that material changes, such as new AI functions, should trigger use case reassessment), further consideration is required.

Will consult relevant experts to develop options, which may be added to guidance. Some of these issues may also be addressed through other resources, e.g. procurement advice, which will be referenced in the impact assessment guidance.  


Key feedback themes and proposed responses

Key themes identified from pilot participant interviews and survey responses are summarised below, together with proposed actions in response.

Foundational learning

Build capability, improve confidence, support experimentation

Lead agency: APSC

The government will build the foundational capability of public servants to use AI responsibly, ethically and effectively. Capability building will proceed alongside work to support leaders in shaping AI adoption.

A foundational AI literacy training offering will be mandated for all staff through the AI in Government Policy update. This will be supported by practical training such as the GovAI interactive learning, resources (website, newsletters), and live webinars with public servants experienced in using AI. The aim is to give all public servants capability foundations together with flexible, just-in-time learning, so they can keep pace with rapid AI technological change and be confident in using AI responsibly and effectively.

Supporting leaders to provide safe and responsible adoption environments for staff will also be a focus. Regular information on organisations leading AI adoption, together with dedicated masterclasses, will be provided to support senior leaders in this task.

In addition, communities of practice and peer learning will be implemented over time to embed capability and drive sustainable adoption, including through the Chief AI Officers initiative.

Going forward: Continual learning and adapting

The plan provides a strong foundation for achieving broad AI literacy across the APS. Ongoing training will support staff in their continual and iterative learning journey as more is discovered about how best to use AI to get better outcomes, and what it means for how we work. Ongoing staff engagement and consultation will help agencies to adapt, manage change effectively, and consider the impacts on employees, particularly women and First Nations peoples.

Going forward: Earning and keeping trust with Australians

Generative AI offers new opportunities to improve how government serves Australians and to build trust through open and transparent engagement with communities. The government will guide AI use with a clear understanding of Australians’ diverse needs, incorporating ongoing insights from implementation, and carefully considering where and how AI is appropriate and what is fundamental to responsible use. As new uses and applications emerge, the government will ensure that the guardrails are appropriate and fit-for-purpose so that our uses are ethical, moral, legal and people-first.

GovAI: Centrally hosted AI services

Technical infrastructure providing central AI tools and model brokerage services, preventing vendor lock-in

Lead agency: Finance

The government will leverage GovAI as a centralised AI hosting service to provide agencies a secure, Australian-based platform for developing customised AI solutions at low cost. By incorporating predefined guardrails, GovAI ensures that security and privacy remain paramount throughout the development process.

GovAI will include a use case library and a vendor agnostic platform with a model selection option based on need, enabling agencies to access a diverse range of AI models for their own development – including an onshore instance of OpenAI’s GPT models – without negotiating individual arrangements with commercial vendors. The inclusion of additional onshore models would further strengthen Australia’s data sovereignty, reduce technical barriers, and deliver measurable cost and time efficiencies across government.

Within a technology-agnostic framework, GovAI allows teams to engage with tools from multiple industry providers, mitigating the risks associated with vendor lock-in and technological obsolescence. Promoting the use of GovAI also minimises duplication, fosters shared learning, and accelerates both capability uplift and delivery timelines.

As a foundational technical service, GovAI will provide the necessary infrastructure and technical skills to develop, test, and support secure access to generative AI alongside customised agency-specific solutions and other whole-of-government applications.
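The vendor-agnostic design goal described above – selecting models by need rather than by vendor – can be illustrated with a small brokerage sketch. This is an illustration of the design principle only; it is not GovAI's actual API, which this document does not describe, and all names below are hypothetical.

```python
# Hypothetical sketch of a vendor-agnostic model brokerage: agencies request a
# model by capability need, and the broker supplies whichever registered model
# meets it, so callers never depend on a specific vendor's interface.
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface every brokered model must satisfy."""
    def generate(self, prompt: str) -> str: ...


class ModelBroker:
    """Map capability needs to registered models, hiding vendor specifics."""

    def __init__(self) -> None:
        self._registry: dict = {}

    def register(self, need: str, model: ChatModel) -> None:
        self._registry[need] = model

    def for_need(self, need: str) -> ChatModel:
        return self._registry[need]


class EchoModel:
    """Stand-in model used only to exercise the interface."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"


broker = ModelBroker()
broker.register("summarisation", EchoModel())
print(broker.for_need("summarisation").generate("hello"))
```

Because agencies code against the shared interface rather than a vendor SDK, swapping the underlying model requires only a registry change, which is the essence of avoiding vendor lock-in.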

Initiative 2 goes here

Lead agency: Department of Government

Initiative 3 here

Lead agency: Department of Government

1. Mandate AI use case impact assessment – with some flexibility

1.1. Update the Policy for the responsible use of AI in government (the AI policy): Introduce mandatory AI use case governance actions. This will require agencies to conduct an AI impact assessment for use cases that meet certain criteria.

1.2. Provide agencies with flexibility to integrate the AI impact assessment into their own governance processes, depending on specific agency needs and capacity: For example, agencies could be required to conduct a threshold assessment, using sections 1-3, for all use cases. Alternatively, they may be required to complete a full assessment (sections 4-11) for use cases with elevated risks identified at the threshold assessment stage. For this, agencies could either:

  • Complete the full assessment as a standalone process using the assessment tool documentation, as tested through the pilot.
  • Integrate sections 4-11, in full or in part, into existing governance processes.  

This flexible, hybrid approach may be appropriate for agencies with governance mechanisms that already address some or all of the requirements in sections 4-11. These agencies may prefer to adapt their existing processes to integrate the section 4-11 requirements, rather than completing a separate AI impact assessment that overlaps with or duplicates existing processes.
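The tiered flow described in 1.2 can be summarised as a simple decision procedure. The sketch below is an assumption-laden illustration of that flow – the rating values and the 'low' cut-off are hypothetical, not prescribed by the policy.

```python
# Minimal sketch of the flexible, tiered assessment flow: a threshold
# assessment (sections 1-3) for every use case, with the full assessment
# (sections 4-11) required only for elevated risk. The ratings and the 'low'
# cut-off are illustrative assumptions.

def assessment_pathway(threshold_rating: str, has_equivalent_governance: bool) -> str:
    """Pick a use case's pathway after the threshold assessment (sections 1-3)."""
    if threshold_rating.lower() == "low":
        return "threshold assessment only"
    # Elevated risk: complete the full assessment, either standalone or
    # integrated into existing governance processes (the hybrid approach).
    if has_equivalent_governance:
        return "integrate sections 4-11 into existing governance"
    return "standalone full assessment (sections 4-11)"
```

For example, a low-risk use case stops at the threshold stage, while an elevated-risk use case in an agency with mature governance would integrate sections 4-11 into its existing processes rather than run a separate assessment.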

2. Update and strengthen risk assessment

2.1. Update the threshold risk assessment (section 3): Include more objective questions that guide officials to correctly identify and assess relevant risks. Consult government risk management experts for feedback on proposed updates.  

2.2. Require assessment officers to record pre-mitigation inherent risk level as well as post-mitigation treated risk.  

2.3. Update supporting guidance on risk assessment: Consider including examples and references to any relevant external resources.
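The proposal at 2.2 – recording pre-mitigation inherent risk alongside post-mitigation treated risk – implies a record shape like the one sketched below. The field names and example values are hypothetical, chosen only to illustrate that both risk levels are captured.

```python
# Hypothetical data shape for a risk entry that records both the
# pre-mitigation (inherent) and post-mitigation (residual/treated) risk
# levels, as proposed in 2.2. Field names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    description: str
    inherent_rating: str                          # pre-mitigation risk level
    mitigations: list = field(default_factory=list)
    residual_rating: str = ""                     # post-mitigation (treated) risk level


risk = RiskEntry(
    description="Model output contains inaccurate advice to the public",
    inherent_rating="high",
    mitigations=["human review before release", "disclaimer on outputs"],
    residual_rating="medium",
)
```

Recording both ratings makes the effect of the mitigations explicit: a reviewer can see at a glance how far the planned treatments reduce the original risk.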

3. Clarify scope of legal review section

3.1. Consider options to update the legal review step (section 11.1) to specify the legal aspects of an AI use case that need to be reviewed: This update will address pilot feedback, including suggestions to reframe the legal review step as a series of targeted questions. This could help clarify the scope of the required legal review, focusing on lawfulness and compliance with relevant legal frameworks, instead of the current open-ended question.

3.2. Consider how AI governance processes in other Australian and overseas jurisdictions incorporate legal review.  

4. Other assessment tool improvements

4.1. Align assessment tool provisions with proposed AI policy updates and ensure AI policy updates consider relevant pilot feedback, including calls for:

  • further guidance on the definition of AI and the ‘covered use case’ criteria – to be addressed in the AI policy itself
  • further guidance on the timing for an initial assurance assessment and subsequent reassessment.

4.2. Explore options to develop a digital assessment tool, while retaining an ‘offline’ document version for agencies that indicated a preference for this option.

4.3. Address additional pilot feedback in updated assessment tool and guidance documents, while ensuring continued alignment with AI in government and broader AI policy developments. For further detail, see Key findings, above, and Key feedback themes and proposed responses in the Context, data and rationale section of this report.   

