  • Document title

    Participants noted the confusingly similar titles of the National framework for the assurance of AI in government and this pilot Australian Government assurance framework, even though they are very different documents. The pilot framework could be considered an assessment form or tool, with fields for users to populate, rather than a traditional, static ‘framework’ document. It is intended to complement and inform existing governance and assurance processes, rather than provide a standalone, comprehensive ‘assurance’ mechanism.

    Proposed response

    Title will be updated to: Australian Government AI impact assessment tool.

  • Assessment format

    Some participants felt a digital tool for the assessment would be useful if it streamlined the process and made assessments more robust through features like business rules and branching questions. Several participants took the initiative to convert the document into a basic Microsoft Forms template for the pilot.

    However, others preferred the existing Word document format, with a single document providing an overview of the full assessment process, where version history is preserved as the assessment is updated. They felt a document format is more familiar and user-friendly, and queried whether a digital tool would meet record keeping requirements.  

    Proposed response

    Explore options for a digital tool that meets key requirements, including version history tracking, record keeping, accessibility and easy navigation.
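
    As a rough illustration only, the sketch below shows how branching questions and simple business rules might be represented in such a tool. The question ids, wording and branch logic are hypothetical assumptions, not drawn from the pilot framework or any existing government system.

    # Minimal sketch of branching questions for a digital assessment tool.
    # All identifiers and question wording are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Question:
        qid: str
        text: str
        # Maps an answer to the next question id; None ends the branch.
        branches: dict[str, str | None] = field(default_factory=dict)

    QUESTIONS = {
        "uses_ai": Question(
            "uses_ai",
            "Does this use case involve an AI system?",
            {"yes": "influences_decisions", "no": None},
        ),
        "influences_decisions": Question(
            "influences_decisions",
            "Does the system materially influence administrative decisions?",
            {"yes": None, "no": None},
        ),
    }

    def next_question(current_id: str, answer: str) -> str | None:
        # Return the next question id for an answer, or None when the
        # branch is complete (a 'no' above skips the follow-up question).
        return QUESTIONS[current_id].branches.get(answer)

    A production tool would also need to persist each saved response with a timestamp to meet the version history and record keeping requirements noted above.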

  • Definition of AI

    Some agencies felt that the current OECD AI definition could capture a range of longstanding rules-based systems that would not traditionally be considered AI.  

    Proposed response

    Consider clarifying advice in the AI policy to distinguish AI from other rules-based systems – e.g. highlighting levels of autonomy and inference as factors. Consider including examples to illustrate AI and non-AI systems.  
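
    To make the distinction concrete, below is a deliberately simplified, illustrative contrast between a rules-based system and one that relies on inference. The eligibility threshold, toy data and scikit-learn model are hypothetical examples, not policy definitions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rules_based_eligibility(income: float) -> bool:
        # Rules-based: every outcome follows from fixed, human-authored
        # logic, with no inference from data.
        return income < 50_000

    # AI-based: the decision boundary is inferred from example data
    # rather than written as explicit rules (illustrative toy data).
    X = np.array([[20_000], [30_000], [60_000], [80_000]])  # income
    y = np.array([1, 1, 0, 0])                              # past outcomes
    model = LogisticRegression().fit(X, y)
    print(model.predict([[45_000]]))  # output inferred from patterns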

  • ‘Covered use case’ criteria

    Some participants were concerned that the current criteria for a covered AI use case are overly broad and would effectively capture all use cases. The criterion capturing any AI use case that ‘materially influences decision-making’ was highlighted as particularly broad, as it could be interpreted to capture most government activities.

    Proposed response

    Consider amending the criteria and moving the ‘covered use case’ criteria into the AI policy. The assessment should include a field to record which criteria apply to the assessed use case. The overly broad criterion mentioned above could be amended to clarify that it refers to substantive administrative decisions – not inconsequential day-to-day decisions.
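
    As a loose sketch of the proposed recording field, the structure below captures which criteria an assessing officer has marked as applicable. The criterion labels are paraphrased placeholders, not the actual policy wording.

    from dataclasses import dataclass

    # Paraphrased placeholder criteria – not the actual policy wording.
    CRITERIA = {
        "influences_decisions": "Materially influences substantive administrative decisions",
        "public_facing": "Interacts directly with members of the public",
    }

    @dataclass
    class CoverageRecord:
        use_case: str
        applicable: list[str]  # keys from CRITERIA marked as applying

        def is_covered(self) -> bool:
            # Any applicable criterion makes this a covered use case.
            return bool(self.applicable)

    record = CoverageRecord("correspondence triage assistant", ["public_facing"])
    print(record.is_covered())  # True -> full assessment required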

  • Should all existing use cases require assessment – or only new ones?

    Some agencies raised concerns about the burden of retroactively assessing dozens of existing use cases and felt any future mandatory assessment should only apply to new use cases.  

    Proposed response

    The AI policy applies to all AI use by in-scope Australian Government agencies, including existing and new AI use cases. Any future AI use case governance requirements should align with this and likewise apply to all AI use cases. Existing AI use cases should be subject to the same governance and assurance processes as new ones, while providing a transition period to support implementation. A 2-tier approach, exempting certain use cases from scrutiny, would be inconsistent with the government’s commitment to build public trust through robust governance.

    Proposed AI policy updates will specify AI use case level governance processes and establish a timeframe for agencies to review existing use cases (e.g. up to 18 months). The impact assessment and guidance will be updated to align with the policy and include further guidance on best practice for assuring existing use cases.

  • Timing for the initial assessment

    The pilot version recommends users undertake AI impact assessment ‘as early as possible’; however, participants noted it may not be possible to complete an assessment in the early development stages of an AI use case. Relatedly, participants queried how impact assessment might interact with budget and procurement processes.

    Proposed response

    Proposed AI policy updates will recommend that assessment should commence as early as possible in the design stage and the initial assessment should be completed before deployment. While completing the full assessment may not be possible in early stages, agencies should familiarise themselves with the assessment and consider its advice on embedding responsible AI practices into system design.

  • Timing and nature of regular assessment reviews

    The pilot impact assessment tool recommends a review of the assessment in response to material changes in scope or technology, or when a use case moves between lifecycle stages. However, participants noted reassessment at every lifecycle stage transition may not always be practical or necessary. Participants also queried whether review should involve a complete reassessment or only the sections most affected by the change that triggered the review, and whether renewed executive sponsor sign-off is required each time.

    Proposed response

    Ensuring assessments are reviewed at regular, planned intervals and/or in response to material changes is essential to good AI governance. Minimum requirements for reassessment intervals will be specified in the AI policy.  

    Assessment and guidance updates could de-emphasise lifecycle stage transition as the key consideration and include examples of major milestones or material changes that could trigger review. This will include recommending agencies align with their project management frameworks when deciding which transition points warrant reassessment (e.g. change in scope, contract variation, preproduction check, change advisory board, go-no-go).
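
    Purely as an assumption-laden sketch, one way to picture this is a mapping from trigger events to a review scope and a sign-off decision, which would also answer the reassessment and sign-off queries above. The trigger names come from the examples in the text; the scope and sign-off values are illustrative only.

    # Illustrative mapping of review triggers to review scope and
    # sponsor sign-off requirements; the values are assumptions, not policy.
    REVIEW_TRIGGERS = {
        "change in scope": {"scope": "full", "sponsor_signoff": True},
        "contract variation": {"scope": "targeted", "sponsor_signoff": True},
        "preproduction check": {"scope": "targeted", "sponsor_signoff": False},
        "go-no-go": {"scope": "full", "sponsor_signoff": True},
    }

    def review_plan(trigger: str) -> dict:
        # Unknown events default to a full review, erring on caution.
        return REVIEW_TRIGGERS.get(
            trigger, {"scope": "full", "sponsor_signoff": True}
        )

    print(review_plan("contract variation"))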

  • Section 3.1 – Risk assessment

    Feedback indicated the risk assessment process is too subjective and overly dependent on the assessing officer's value judgments and background knowledge.

    Responses to this section differed significantly depending on the assessing officer’s familiarity with the AI risk landscape, with less experienced officers tending to underrate or overlook risks they weren’t aware of. Some users reported challenges using section 3 as a high-level record of post-treatment risks and planned mitigations, because it does not specify how to assess inherent risk or select mitigations.

    Proposed response

    Particular attention will be focused on updating this section, drawing on pilot feedback, emerging best practices and expert advice. Updates could include:  

    • adding more objective questions, considering particular risks that may be higher priority or less well understood
    • considering developing a checklist of use case features that would automatically elevate risk above ‘low’ (see the sketch after this list)
    • adding examples/features illustrating low/medium/high risk use cases to the guidance
    • recommending assessing officers consult experts and peers to review risks as a team
    • adding fields to document inherent, pre-mitigation risks
    • further emphasising that the assessment process does not replace agency risk management processes.
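
    A minimal sketch of that checklist idea, assuming hypothetical feature names: any matching feature floors the rating so the use case can never be recorded as ‘low’ risk.

    # Hypothetical elevating features – not drawn from the pilot framework.
    ELEVATING_FEATURES = {
        "affects_individual_entitlements",
        "uses_personal_information",
        "operates_without_human_review",
    }

    def floor_rating(self_assessed: str, features: set[str]) -> str:
        # Floor a 'low' self-assessment at 'medium' when any elevating
        # feature is present; otherwise keep the officer's rating.
        if self_assessed == "low" and features & ELEVATING_FEATURES:
            return "medium"
        return self_assessed

    print(floor_rating("low", {"uses_personal_information"}))  # medium
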
  • Section 3.3 – Executive sponsor endorsement

    Participants reported some confusion on the division of responsibilities for completing the assessment and providing endorsement. For example, should the executive sponsor come from the business area or ICT area; and (how) should the agency’s AI accountable official be involved?

    Pilot participants that submitted assessments to their CIO/equivalent found sign-off relatively straightforward, while those that submitted to the policy/business area SES found it more challenging, requiring additional briefing and time.  

    Proposed response

    Consider more specific guidance for the executive officers signing off on the assessment. Consider providing flexibility for combined sign-off – e.g. the CIO/AO (or their delegate) could sign off on technical aspects, while the business area confirms alignment with policy objectives.

  • Section 11.1 – Legal review

    Pilot participants advised their legal teams struggled with this section and would be reluctant to provide sign-off, which could be a blocker. Some legal teams advised they would need to procure external legal advice.  

    No participants secured formal legal review as part of the pilot due to legal teams’ resource constraints and turnaround times (legal review usually requires at least 2-3 weeks, and longer for complex requests, especially those requiring external advice).

    Proposed response

    Updates will clarify the purpose of legal review. Consider including more specific guidance on the questions the assessing officer should ask of their legal team.

    List examples of the types of legal risks to consider – for example, 'If you don't undertake legal review, you might miss/overlook XYZ.' Some pilot participants provided useful detailed feedback on this aspect, including feedback from their legal teams.  

  • Sections 11.3 and 11.4 external review suggestion

    Regarding the requirement at section 11.3 for internal governance review, participants sought advice on identifying or creating a suitable internal governance review body and suggested including specific guidance defining the scope and objectives for this process.  

    Some agencies questioned the inclusion of an external review body at section 11.4 for high-risk use cases, querying how this would work in practice – while recognising this is only a suggestion and not mandatory.  

    Participants did not indicate a strong desire for the government to establish a new external review body. No high-risk use cases were tested through the pilot, and the volume of future demand for review of high-risk cases remains unclear.  

    Proposed response

    Some aspects of this will be covered in the AI policy update, including advice to reuse an existing governance body where possible. Will consider other jurisdictions' settings and experiences with external review processes. If the government decides to establish a central external review body, it would need to consider:

    • membership – including private sector/community/academic experts, and how to manage probity and information security concerns
    • governance and processes – who would chair; how would the body make decisions (consensus/majority voting); how frequently would it meet?
    • direct costs – including member fees, travel, other meeting costs
    • other resource implications – agency staffing to support the body.  
  • Skills and capability considerations

    Some participants sought more guidance to specify the types of roles, expertise and skills they should consult on specific sections – including internally (e.g. risk/assurance teams, CIO, data governance), and/or external expertise (e.g. human rights, cyber experts).  

    In particular, participants felt there must be diversity in team skills, background and experience, and that ideally business areas should lead the assessment, but with significant support and input. Participants noted AI adoption presents sociotechnical issues that differ from traditional ICT projects.

    The pilot assessment tool mentions contributing officers should be ‘sufficiently trained’; however, some participants called for more guidance on defining this.

    Proposed response

    Updates to address this feedback could include:  

    • emphasising the recommendation to consult widely during assessment and involve diverse expertise
    • referring to APS job families, SFIA or other relevant frameworks.

    However, overly prescribing the roles and skills required to complete assessment could make AI adoption more challenging, especially for smaller agencies with limited resources and fewer staff with specific expertise. Will also consider suggestions to commission experts to develop other supporting resources, for example on human rights considerations – dependent on resourcing.  

  • Governance considerations

    Some agencies perceived the impact assessment as recommending the establishment of entirely new AI-specific governance processes and found alignment with existing processes challenging. Other agencies found integrating with existing governance processes relatively straightforward.

    Proposed response

    Consider bolstering guidance for assessing officers on aligning AI impact assessment with existing internal (and broader APS) governance settings, policies, procedures, oversight mechanisms – while emphasising the assessment does not replace existing obligations.  

  • Checklist or one-page overview

    Several participants suggested something like a one-page overview could be useful, especially for executive sponsors providing sign-off.

    Proposed response

    Need to consider what this would look like and where it would fit within (or alongside) the current impact assessment. Over-simplifying may lead to a superficial, 'tick box' approach. A digital tool may streamline this – e.g. by auto-generating a summary.
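
    For instance, here is a sketch of auto-generating a one-page summary from structured assessment fields. The field names and values are invented for illustration and do not reflect the actual assessment schema.

    # Hypothetical assessment fields; the real schema may differ.
    assessment = {
        "use_case": "Correspondence triage assistant",
        "risk_rating": "medium",
        "executive_sponsor": "Chief Information Officer",
        "key_mitigations": ["human review of outputs", "quarterly bias audit"],
    }

    def one_page_summary(a: dict) -> str:
        # Flatten the structured fields into a short, sign-off-ready text.
        return "\n".join([
            f"Use case: {a['use_case']}",
            f"Residual risk rating: {a['risk_rating']}",
            f"Executive sponsor: {a['executive_sponsor']}",
            "Key mitigations: " + "; ".join(a["key_mitigations"]),
        ])

    print(one_page_summary(assessment))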

