When and how to apply this criterion
When to apply
Apply Criterion 9 during Beta and Live, and consider it during Discovery. Collate metrics, monitor your service holistically and report your results to contribute to government's view of its services landscape.
Adhere to Criterion 9 across the Service Design and Delivery Process to promote continuous improvement.
How to apply
Questions for consideration:
- what attributes are currently being measured?
- what do existing results say about the service or opportunity?
- what is the story that the data tells us?
- how have results changed over time?
- what service improvements are necessary?
Pilot implementation report
Findings and recommendations
Introduction
This report outlines the findings from the pilot of the draft Australian Government artificial intelligence (AI) assurance framework (the pilot framework) and supporting guidance, conducted by the Digital Transformation Agency (DTA) from September to November 2024.
The pilot framework contains a draft AI impact assessment tool designed to help Australian Government agencies to identify, assess and manage potential AI use case impacts and risks. This report refers to the pilot framework as the AI impact assessment tool to better reflect the purpose of the document. The pilot involved staff from 21 Australian Government agencies, listed at Appendix A, that volunteered to test the pilot framework’s AI impact assessment tool.
AI systems operate with elevated complexity, speed and scale, which can amplify potentially harmful outcomes in ways that existing technology governance frameworks and processes may not fully address. AI system opacity, bias, unpredictability, novelty and rapid evolution can further compound these challenges. Robust governance and assurance processes are essential to ensuring that AI systems operate as intended, uphold ethical principles and manage risks effectively throughout their lifecycle. By addressing these specific AI challenges, responsible AI practices serve as critical enablers for AI innovation.
In June 2024, the Australian Government and state and territory governments announced the National framework for the assurance of AI in government (the national framework). This provided the first nationally consistent, principles-based approach to assurance of government AI use, aligned with Australia's AI Ethics Principles. The impact assessment process tested in this pilot provides a tool for agencies to demonstrate their use of AI is consistent with the national framework and the Ethics Principles.
AI accountability and transparency requirements at the agency level for all in-scope agencies were introduced in September 2024, with the Policy for the responsible use of AI in government (the AI policy). However, at the use case level, each agency must decide its own AI governance settings. While aspects of established approaches may be useful, the governance of AI systems remains an emerging discipline, and agencies face challenges navigating complex decisions without the benefit of established expertise or time-tested frameworks.
Inconsistent approaches to AI use case governance can lead to gaps in AI risk identification and mitigation, which can hamper efforts to secure public trust and confidence in government’s safe and responsible adoption of innovative technologies. This in turn may limit government AI adoption, particularly in more complex areas. These are precisely the areas where potential elevated risks, with appropriate mitigations, may be justified by greater potential benefits in terms of improved government services and efficiency. Managing these risks to realise these benefits safely and responsibly requires robust governance and assurance. The impact assessment process tested through this pilot is a key first step to achieving these goals.
The AI impact assessment process tested in this pilot aims to provide agencies with a consistent, structured approach to assessing an AI use case’s alignment with Australia's AI Ethics Principles. The draft impact assessment process comprised 11 sections:
- The first 3 sections ask the assessing officer to document the purpose and expected benefits of the AI use case, along with a high-level assessment of key risks and planned mitigation measures. If all risks are rated low, the assessing officer can seek executive endorsement to conclude the assessment at section 3 and proceed with the use case.
- If any of the section 3 threshold risks are rated medium or high, the assessment should proceed through the remaining sections 4 to 11. These sections require assessing officers to document how they will address key risks and ensure their AI use is safe and responsible.
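The two routes described above amount to a simple decision rule. Purely as an illustration (this sketch is not part of the pilot framework or its guidance; the function, parameter and risk category names are hypothetical), the routing could be expressed as:

```python
# Illustrative sketch only: the routing logic described above, expressed as code.
# The section numbers and low/medium/high ratings come from the pilot framework as
# described in this report; function, parameter and category names are hypothetical.

def assessment_path(threshold_risk_ratings: dict[str, str]) -> str:
    """Decide how far an AI impact assessment proceeds, based on section 3 ratings."""
    if all(rating == "low" for rating in threshold_risk_ratings.values()):
        # All threshold risks rated low: the assessing officer can seek executive
        # endorsement and conclude the assessment at section 3.
        return "conclude at section 3 (threshold assessment, with executive endorsement)"
    # Any medium or high rating: continue through the extended assessment.
    return "proceed to extended assessment (sections 4-11)"


# Example: a single medium-rated threshold risk triggers the extended assessment.
print(assessment_path({"privacy": "low", "fairness": "medium", "safety": "low"}))
```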
The supporting guidance mirrors the assessment's 11-section structure with advice for completing each section. In October 2024, during the pilot period, the DTA published the pilot AI assurance framework and guidance.
Since the pilot concluded, the DTA has published further resources to support agency AI adoption that complement the impact assessment tool – including:
- AI procurement advice, an AI contract template, and model contract clauses for purchasing AI systems
- AI technical standard, which provides practical guidance on best practice for the end-to-end design, development, deployment, and use of AI systems
- guidance for agencies and staff on using public generative AI tools safely and responsibly
Key insights
Close to two-thirds of pilot survey respondents said the draft assessment tool helped them identify risks that existing processes would not have captured.
Close to 90% considered the guidance helpful or very helpful for completing the assessment.
Around 70% of survey responses reported the assessment questions were clear and easy to understand.
Establishing clear, consistent AI governance practices would help lift confidence in exploring AI innovation.
Agencies remain cautious about adopting AI
Most participants only tested low-risk, less complex AI use cases using the draft assessment tool during the pilot period. This meant many of the assessments concluded at the initial threshold assessment stage (sections 1-3) and did not proceed to the extended assessment (sections 4-11) required for use cases with elevated risk. Most use cases were in the early exploratory stages – only a handful of participants reported assessing in-production use cases as part of the pilot.
Key data constraints included:
- the small pool of participants
- limited number of extended assessments beyond the section 3 threshold assessment
- incomplete survey response set
- short pilot period – participants reported this did not allow enough time for comprehensive assessments, while other urgent priorities diverted agency resources away from this non-mandatory pilot exercise
- divergent feedback on some aspects of the assessment process, reflecting varied experiences and perspectives, which may be challenging to address to the satisfaction of all stakeholders.
While other agencies that did not participate in the pilot may be exploring or already deploying more complex AI use cases, pilot participants reported their agencies were reluctant to pursue higher-risk AI adoption. Participants cited factors including resource constraints and uncertainty around AI-specific governance and risk management processes. Concerns that a misstep could result in unintended harm or expose the agency to reputational damage also influenced this cautious approach.
Higher-risk adoption could deliver the greatest value, with appropriate mitigations. Addressing this requires an integrated, strategic approach, with impact assessment being just one of the tools required for achieving safe, responsible and successful innovation. To support agencies, the DTA has developed additional resources, such as the AI technical standard, AI procurement advice, model AI contract clauses and an AI contract template, and will update the Policy for the responsible use of AI in government. These efforts aim to build a robust foundation for responsible AI adoption, providing agencies with tools and guidance to navigate the complexities of AI implementation.
Agencies are calling for clear parameters and practical guidance
Close to two-thirds of pilot survey respondents said the draft assessment tool helped them identify risks that existing processes would not have captured. Around half said they found the assessment process useful for ensuring responsible use of AI. Close to 90% considered the guidance helpful or very helpful for completing the assessment. Further insights are provided under Survey data in the Context, data and rationale section of this report.
Pilot participants generally welcomed the draft assessment tool, noting it helped build trust and confidence that AI projects are managing risks and impacts safely and responsibly. Securing this trust and confidence – both internally, with agency staff and leaders, and with relevant external stakeholders – is crucial for the successful rollout of AI solutions.
With no mandatory requirements on AI use case governance in the Australian Government, pilot agencies reported low confidence in AI adoption. Agencies also appeared reluctant to invest resources in complex AI projects without clear criteria to verify and publicly demonstrate their AI use is safe. Publishing an updated impact assessment tool will set the consistent governance expectations needed to support confidence in AI adoption, working in tandem with other DTA resources published since the pilot concluded, including the AI technical standard.
Among the agencies that had already adopted AI, it was clear that governance practices were inconsistent and not always comprehensive. This inconsistency may lead to gaps in AI risk identification and management, which could result in unintended negative outcomes that undermine public trust.
Greater flexibility will help meet different agency needs and contexts
Some agencies reported the draft assessment tool complemented and strengthened their existing governance processes. They found integrating it into their operations straightforward.
Others reported that parts of the assessment appeared to duplicate existing agency processes. For example, some larger operational agencies already have extensive governance, risk and assurance processes, supported by dedicated resources. However, even these agencies said the pilot assessment tool helped them identify risks not captured by existing processes.
To accommodate diverse agency needs, a flexible approach to adopting the assessment tool is outlined in Recommendation 1.
Identifying and assessing AI risk remains a challenge
A key challenge pilot participants identified in feedback interviews and survey responses was identifying and assessing AI risk. Strengthening the risk assessment process to include more objective criteria is an area of focus for the next phase of updates.
Of all the sections in the pilot assessment, the initial threshold risk assessment step (section 3.1) received the most comments and suggestions for improvement, both in the feedback interviews and survey responses. When asked if any sections were particularly challenging, 43% of survey responses referenced the risk assessment.
Section 3.1 asks assessing officers to provide risk ratings in response to a series of open-ended questions, requiring consideration of a wide range of potential use case impacts – some of which are abstract or indirect – including social, ethical, legal, and reputational risks. The pilot draft instructs assessing officers to record risk ratings ‘accounting for any risk mitigations and treatments’, rather than simply assessing inherent, pre-mitigation risk levels. This requirement can complicate the assessment process, increasing the likelihood of subjective or inconsistent ratings, as officers must assess both the risks and the effectiveness of any treatments applied.
Participants noted that officers completing the assessment ‘don’t know what they don’t know’ and at times were not aware of the ways AI could introduce new risks or amplify existing risks of harm. This highlights the importance of involving colleagues with diverse expertise in assessments to ensure potential risks are identified and assessed accurately and consistently.
Pilot participants with experience in fields such as risk management, data governance and ICT generally understood the rationale behind the risk questions. However, those with less exposure to these topics found it more challenging to interpret the questions and apply them to their AI use case. In general, participants called for more guidance to support risk identification, assessment and mitigation. Recommendation 2 outlines an approach to address this feedback, including clarifying the risk assessment questions, adding more explanatory guidance and focusing on inherent risk.
Securing legal review was another major hurdle
Participants who conducted extended assessments, beyond the section 3 threshold assessment, reported significant challenges completing the legal review at section 11.1. They reported that their legal teams sought greater clarity on the specific legal aspects of the AI use case they were being asked to sign off on, highlighting the importance of clearly defining the scope and purpose of each section.
Legal teams usually require at least several weeks to provide advice, and even longer for complex matters. Some participants also reported their internal legal teams would need to procure external legal advice to complete this section. The updated assessment tool will seek to address these concerns and provide effective consideration of legal aspects of each AI use case, as outlined in Recommendation 3.
Other updates will help to clarify and streamline aspects of the assessment
In addition to the key insights above, pilot participants provided valuable insights and practical suggestions to improve the assessment tool, summarised under Key feedback themes and proposed responses in the Context, data and rationale section of this report. The DTA will consult other relevant experts and consider other developments in the AI policy landscape to inform further updates. This is addressed in Recommendation 4.
Context, data and rationale
Pilot AI assurance framework background
The pilot framework’s AI impact assessment process and supporting guidance were developed by the AI in Government Taskforce, which operated from September 2023 to June 2024. The taskforce was co-led by the DTA and the Department of Industry, Science and Resources (DISR) and staffed by secondees from 12 Australian Public Service (APS) agencies. The drafting process involved several rounds of consultation, with interested agencies providing feedback that informed further refinements. After the taskforce concluded, the DTA resumed responsibility for the AI in government agenda.
This included implementing the Policy for the responsible use of AI in government in September 2024, which introduced mandatory agency-level AI accountability and transparency requirements. The policy also recommends agencies make sure staff have the skills to responsibly engage with AI. To support this, the DTA has developed an online training module on AI in government fundamentals.
The AI impact assessment process tested in this pilot is designed to assess an AI use case’s alignment with Australia's AI Ethics Principles. For instance, the assessment asks officials to explain how they will:
- define and measure fairness
- uphold privacy and human rights obligations
- incorporate diversity
- ensure transparency, explainability and contestability
- designate accountability
- demonstrate reliability and safety, including through data governance, testing, monitoring and human intervention mechanisms.
Pilot objectives
The summary below outlines how the pilot addressed its 5 objectives.
1. To test whether the framework meets its intent of placing the government as an exemplar in the responsible use of AI by assisting agencies to identify risk and apply appropriate mitigations to specific use cases.
More than half of the survey responses rated the framework assessment process useful for ensuring responsible use of AI, and the same proportion reported the pilot framework helped them manage and mitigate risks that existing agency processes would not have.
However, participants also found the risk assessment process in the assessment challenging and provided feedback on other areas for improvement. This report's recommendations to update the assessment, including the risk assessment process, will further address this pilot objective.
2. To stress test, gather feedback and refine the framework (e.g. considering clarity of language, time taken, ease of use).
Around 70% of survey responses reported the assessment questions were clear and easy to understand. At the same time, the pilot's stress testing revealed a number of areas for improvement, including the risk assessment and legal review sections.
3. To identify any gaps that need to be addressed.
Key gaps participants raised related to the risk assessment and legal review sections. These and other areas for improvement identified in the pilot data will be addressed through the proposed updates outlined in the Recommendations section and below under Key feedback themes and proposed responses.
4. To gather evidence to support implementation considerations such as:
- making the framework mandatory
- treatment of existing use cases
- the need for a central oversight mechanism for high-risk use cases, and the potential cost/resourcing implications of establishing such a mechanism.
Pilot participants noted that setting clear and consistent AI governance requirements would enable greater AI adoption and help build public trust. Recommendation 1 outlines a proposed approach to mandating AI use case impact assessment, with some flexibility to support implementation in diverse agency contexts.
While some participants raised concerns that requiring assessment of all existing use cases may be burdensome, the DTA considers that applying minimum AI use case governance requirements to all AI use is desirable. See Key feedback themes and proposed responses below for further details on plans to address this.
The pilot did not provide strong evidence to support a central oversight mechanism for high-risk use cases, with only 2 reported high-risk assessments. However, should the volume of higher-risk AI use cases increase in future, as agencies build AI confidence and capability, revisiting this question may be warranted.
5. To raise awareness of the framework and the Policy for the responsible use of AI in government across the APS.
While it is difficult to measure the extent to which the pilot raised awareness of the assessment and policy, a number of agencies contacted the DTA after the pilot commenced asking to join, with 4 agencies ultimately joining the pilot in October. Participants indicated they and their executive were eager to adopt recognised methods to demonstrate their current or planned AI use upheld the government's commitment to exemplary safe and responsible AI adoption.
Pilot agencies
The 21 pilot agencies, listed at Appendix A, included a range of agency types: very large operational agencies, medium and large policy agencies, and smaller, specialised entities. Some pilot agencies had not yet implemented any AI tools and were only in the early exploration and experimentation stages of adoption, while others had longstanding AI use cases in production.
These diverse agency perspectives have provided valuable insights into how the AI impact assessment can be applied in different operational contexts. These insights have informed options to further streamline, clarify and strengthen the assessment to support broad, consistent implementation.
Methodology
- 21 pilot agencies
- 16 agencies submitted surveys
The pilot gathered qualitative feedback on the AI impact assessment process through midpoint interviews in October 2024 and post-pilot interviews held from late November 2024 to January 2025. Participants were also asked to complete a feedback survey following each use case assessment to provide quantitative and qualitative data (survey questions provided at Appendix B). Some pilot agencies also shared their use case assessment documentation.
The pilot was concerned with the process of completing the AI impact assessment, rather than the content of the assessments. It sought feedback on the practicality and usability of the draft impact assessment and supporting guidance, including any aspects that were more challenging, that could be improved or that were missing.
Table 1: Reported assessments

| Assessment category | Number |
| --- | --- |
| Total number of reported assessments conducted during pilot | 43 |
| Threshold assessments (sections 1-3) | 22 |
| Extended assessments (some or all of sections 4-11) | 14 |
| Unknown (agency did not advise assessment type) | 7 |

Participants reported they assessed 43 use cases in total through the draft AI impact assessment process, comprising:
- 22 low-risk, lower-complexity use cases that concluded at the initial threshold assessment (sections 1-3) and did not require an extended assessment
- 14 use cases that undertook extended assessment beyond section 3. This report refers to these as ‘extended assessments’ as, technically, none completed the ‘full assessment’ process
- 7 use cases for which agencies did not specify the type of assessment (threshold assessment or extended assessment)
- 2 use cases with a high-risk rating.
None of the 14 extended assessments completed every section of the assessment process during the pilot period. None of the extended assessments completed the legal review section (section 11.1) and only a handful underwent the formal internal governance review (section 11.3).
Table 2: Survey submissions

| Survey category | Number |
| --- | --- |
| Total number of post-assessment surveys submitted | 23 |
| Threshold assessment surveys | 13 |
| Extended assessment surveys | 10 |

The pilot assessment tool only required full assessment for use cases rated medium or high-risk at the section 3 threshold assessment. Some participants chose to conduct an extended assessment for use cases rated low-risk as an exercise, so the 14 extended assessments conducted as part of the pilot include a mix of low, medium and high-risk use cases.
Table 3: Distribution of reported assessments

| Number of assessments reported | Number of agencies |
| --- | --- |
| 0 | 1 |
| 1 | 10 |
| 2 | 5 |
| 3 | 3 |
| 5 | 1 |
| 8 | 1 |
| Total | 21 |

The DTA asked pilot participants to submit a separate survey for each use case assessment to capture any differences in the assessment process for different types of AI use cases. In total, 16 pilot agencies submitted 23 post-assessment surveys.
- Some agencies provided separate surveys for each assessment, as requested.
- Some agencies provided a single survey response with consolidated feedback covering multiple similar assessments, rather than submitting multiple surveys with similar answers.
- All respondents completed the same survey. However, it is important to note that the survey responses from participants who only completed the threshold assessment (sections 1-3) do not encompass the entire assessment process required for use cases with elevated risk.
- Some agencies reported the number of assessments but did not submit a survey.
Some agencies also submitted their draft assessment documentation, which provided valuable insights into how the pilot impact assessment was applied in different contexts and for different use case types. Sharing the assessment documentation was not a requirement of the pilot.
Limitations
A number of limiting factors make drawing strong conclusions from the pilot data challenging, including the:
- relatively short pilot period
- small pool of participants
- low numbers of completed extended assessments
- incomplete survey response set (5 of 21 agencies did not submit a survey)
- divergent feedback among participants.
Participants with low-risk use cases, which only required an initial threshold assessment (sections 1-3), were able to complete all questions, including securing executive endorsement. This yielded useful feedback on different experiences with the executive endorsement process and suggestions to improve it.
However, none of the use cases that proceeded to extended assessment were able to complete all the assessment steps in the pilot timeframe. None of these extended assessments conducted during the pilot secured legal review or internal governance review body approval (section 11). This means that feedback on these aspects of the assessment is primarily based on desktop review rather than practical application.
Another aspect of the assessment process that was not tested was the requirement to reassess use cases in response to material changes, such as transition between AI lifecycle stages or major change in scope, usage or operation. Due to the limited pilot period, none of the use cases assessed during the pilot underwent changes that would trigger reassessment.
Possible reasons for the low number of use cases, limited number of higher-risk or complex use cases, and lack of completed extended assessments include:
- the short pilot period, initially planned for 2 months and extended to 3 months. AI use case development and approval processes often span many more months
- limited agency resources for a non-mandatory pilot exercise, overtaken by higher-priority tasks. It's possible that some pilot agencies had other AI use cases they could have used to test the impact assessment process but were unable to do so due to resource or other constraints
- low levels of familiarity with and understanding of AI-specific risks, meaning use cases that should have been identified as posing elevated risk were identified as low-risk and were therefore not put through the extended assessment
- few pilot agencies with actual AI use cases in production.
Most of the participants reported use cases in the early exploratory stages, with a focus on lower-risk, less complex use cases that do not use or produce sensitive data. This suggests agencies remain cautious about AI and are taking a measured approach to its adoption. Starting with simpler use cases may help gradually build AI confidence and capability and secure leadership support. There may be other agencies exploring or already deploying more complex AI use cases; however, they did not participate in the pilot.
Survey data
Nearly 60% of the 23 survey responses rated the impact assessment process useful for ensuring responsible use of AI, while 35% gave the process a neutral rating and only 2 responses found it not useful (Chart 1). Nearly 70% of survey responses reported the assessment questions were clear or very clear and easy to understand (Chart 2).
Chart 1: How useful did you find this assessment process for ensuring responsible use of AI? (n=23)
Chart 2: How clear and easy to understand were the questions in the framework? (n=23)
Nearly 90% considered the guidance helpful or very helpful for completing the assessment. None of the responses rated the guidance unhelpful (Chart 3). Just over half of the surveys reported their assessment involved 1 to 4 staff (Chart 4). These were mostly lower-risk, less complex use cases that only required a threshold assessment.
Chart 3: How helpful was the guidance for completing the framework? (n=23)
Chart 4: Approximately how many people were involved in completing this assessment? (n=23)
One survey response listed over 20 internal agency officials who contributed to the use case assessment, including privacy, legal, fraud, cyber and several specialist ICT teams, as well as senior executives and the third-party software provider.
This same response reported the assessment took over 20 working days to complete for this particular use case. However, this was an outlier, as nearly 90% of surveys reported completing the framework document took up to 5 working days (Chart 5).
Half of the surveys did not clearly specify how long the overall assessment process took. The survey design, with a single free-text field for both responses, may have contributed to this ambiguity. In some cases, it appears that ‘completing the framework document’ also completed the ‘overall assessment process’, especially if this was only an initial threshold assessment for a low-risk use case, and not an extended assessment. These responses reported finding the initial threshold assessment process straightforward, taking less than 5 working days to complete end-to-end.
Chart 5: How long do you estimate it took to complete a) the framework document and b) the overall assessment process? (n=23)
Just under two-thirds of surveys reported the draft impact assessment process helped identify and assess risks that existing processes would not have captured (Chart 6).
Chart 6: Did the Framework help you identify and assess any risks that existing processes would not have captured? (n=23)
Table 4: Post-assessment survey – all yes/no questions (n=23)

| # | Question | Yes | No | N/A | Not stated |
| --- | --- | --- | --- | --- | --- |
| Q5 | Did any of the delegates request further information before approving the assessment? | 50% | 23% | 27% | 0% |
| Q6 | Did you need to consult any specialist expertise to complete the assessment? | 55% | 27% | 18% | 0% |
| Q7 | Did the Framework help you identify and assess any risks that existing processes would not have captured? | 68% | 32% | 0% | 0% |
| Q8 | Did the Framework help you manage and mitigate any risks that existing processes would not have? | 50% | 41% | 5% | 5% |
| Q9 | Did completing this assessment lead to any changes in your AI project or use case? | 41% | 50% | 9% | 0% |
| Q10 | Did you encounter any usability issues with the Framework document itself? | 41% | 50% | 0% | 9% |
| Q16 | Was your agency's existing governance structure sufficient to oversee this AI use case? | 68% | 18% | 0% | 14% |

Other considerations
In addition to the pilot feedback, the DTA will consider other relevant developments in the AI policy landscape to inform updates to the impact assessment process and ensure continued alignment. These include:
- insights arising from DTA’s development of AI technical standards and broader updates to the AI in government policy
- DISR’s whole-of-economy safe and responsible AI work, including:
  - Voluntary AI Safety Standard (published September 2024)
  - proposals for mandatory guardrails (September 2024)
  - AI Impact Navigator (October 2024)
- the APS Data Ethics Framework (December 2024)
- the Attorney-General’s Department’s pending automated decision-making reforms
- the Australian National Audit Office report on governance of AI at the Australian Taxation Office (February 2025)
- recent parliamentary inquiry reports, including:
  - Senate Select Committee on Adopting AI (November 2024)
  - Joint Committee of Public Accounts and Audit Inquiry into the use and governance of AI systems by public sector entities - 'Proceed with Caution' (February 2025)
- state and territory government AI policy developments
- international developments – including national and multilateral government initiatives
- emerging research on AI safety and assurance.
Appendix A
Pilot agencies
In total, 21 agencies completed the pilot (one agency that initially expressed interest in joining ultimately withdrew from the pilot).
- Attorney-General’s Department
- Australian Bureau of Statistics
- Australian Institute of Family Studies
- Australian Taxation Office
- Clean Energy Regulator
- Department of Agriculture, Fisheries and Forestry
- Department of Climate Change, Energy, the Environment and Water
- Department of Employment and Workplace Relations
- Department of Finance
- Department of Foreign Affairs and Trade
- Department of Health and Aged Care
- Department of Home Affairs
- Department of Industry, Science and Resources
- Department of Parliamentary Services
- Digital Transformation Agency
- eSafety Commissioner
- Fair Work Commission
- IP Australia
- Murray-Darling Basin Authority
- National Health and Medical Research Council
- Services Australia
Appendix B
Post-assessment survey questions
Assessment Details
Name of AI Use Case
Reference Number
Lead Agency
Assessment Contact Officer Name
Assessment Contact Officer Email
The Framework
- On a scale of 1-5, how useful did you find this assessment process for ensuring responsible use of AI? (1 being not useful at all, 5 being very useful)
Please include a brief explanation of your score. (Optional)
- On a scale of 1-5, how clear and easy to understand were the questions in the Framework? (1 being very unclear, 5 being very clear)
Please include a brief explanation of your score. (Optional)
- What sections or questions, if any, did you find particularly challenging to complete? Why?
- Approximately how many people were involved in completing this assessment? What are their roles within your agency (for example, project officer, decision maker, procurement officer etc.)?
- Did any of the delegates request further information before approving the assessment? If yes, please briefly describe.
- Did you need to consult any specialist expertise to complete the assessment? If so, what kind and why?
- Did the Framework help you identify and assess any risks that existing processes would not have captured? If yes, please briefly describe.
- Did the Framework help you manage and mitigate any risks that existing processes would not have? If yes, please briefly describe.
- Did completing this assessment lead to any changes in your AI project or use case? If yes, please briefly explain.
- Did you encounter any usability issues with the Framework document itself?
- How long do you estimate it took to complete a) the Framework document, and b) the overall assessment process? (i.e. hours, days, weeks)
- Do you have any suggestions for improving the Framework or assessment process?
The Guidance
- On a scale of 1-5, how helpful was the guidance for completing the framework? (1 being very unhelpful, 5 being very helpful)
Please include a general explanation of your score. (Optional)
- What additional guidance or resources would have been helpful in completing this assessment?
Governance
- What is your agency's governance structure for the oversight of this AI use case?
- Was your agency's existing governance structure sufficient to oversee this AI use case? If yes, please briefly explain why. If not, what did you change to ensure it is?
Anything else?
- Any other comments or feedback in relation to the Framework, Guidance or governance structures.
Secretaries’ Digital and Data Committee communique
Date: 25 September 2025
Supporting successful project delivery
Secretaries' Digital and Data Committee (SDDC) members discussed the challenges and opportunities for supporting successful digital project delivery and managing government's digital estate.
Members discussed the growing challenges of legacy systems and an expanding digital estate, noting rising software and labour costs and an increase in sustainment projects. They explored the value of system-wide solutions, including promoting reuse and managing legacy debt, while also highlighting the increasing number of projects under assurance oversight.
Digital Investment Decisions
Secretaries noted findings from the Digital Transformation Agency’s (DTA) analysis of digital investment decisions over the 2022–23 to 2025–26 Budgets.
Data and Digital Government Strategy Implementation Plan for 2025
Members requested minor changes to the Data and Digital Government Strategy Implementation Plan for 2025 and agreed to endorse the plan with changes out of session.
Government AI Landscape Update
The Committee noted the update on Australian Public Service (APS) Cloud capabilities and acknowledged the support being provided by the DTA to strengthen whole-of-government capability. The Committee endorsed the DTA’s draft Cloud Policy, which aims to drive cloud adoption across the APS.
January to June 2025 SDDC Performance Report
Members endorsed the SDDC performance report for the period 1 January – 30 June 2025.
SDDC Annual Terms of Reference Review
Members agreed to the updated Terms of Reference for the SDDC.
Subcommittee Performance Reporting January – June 2025: Digital Leadership Committee
Members endorsed the DLC performance report for the period 1 January – 30 June 2025.
Subcommittee Performance Reporting January – June 2025: Deputy Secretaries Data Group (DSDG)
Members endorsed the DSDG performance report for the period 1 January – 30 June 2025.
Your responsibilities
To successfully meet this criterion, you need to:
- establish a baseline for your service
- identify the right performance indicators
- measure, report and improve according to strategies