-
Image description
Diagram headline: 'Change in delivery confidence ratings over 12 months'.
The diagram shows the change in delivery confidence ratings over the 12 months from February 2024 to February 2025. It states that transparency and understanding of project performance are increasing.
In February 2024, 3 projects had high delivery confidence, 12 had medium-high, 7 had medium, 3 had medium-low and 23 projects did not have delivery confidence ratings available.
In February 2025, 8 projects had high delivery confidence, 30 had medium-high, 14 had medium, 7 had medium-low and 2 had low. One project did not have a delivery confidence assessment (DCA) available because its commencement was delayed; a DCA will be conducted shortly.
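As a consistency check (a minimal Python sketch; the percentages reported elsewhere in this report appear to use half-up rounding, which is assumed here), the counts above reproduce the reported shares of projects rated Medium-High or above:

```python
from decimal import Decimal, ROUND_HALF_UP

# Delivery confidence counts as described in the chart above.
ratings_2024 = {"High": 3, "Medium-High": 12, "Medium": 7, "Medium-Low": 3, "Unrated": 23}
ratings_2025 = {"High": 8, "Medium-High": 30, "Medium": 14, "Medium-Low": 7, "Low": 2, "Unrated": 1}

def share_medium_high_or_above(ratings):
    """Share of all listed projects rated High or Medium-High, as a
    percentage rounded half-up to one decimal place."""
    total = sum(ratings.values())
    top = ratings["High"] + ratings["Medium-High"]
    pct = Decimal(top) / Decimal(total) * 100
    return pct.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

print(share_medium_high_or_above(ratings_2024))  # 31.3
print(share_medium_high_or_above(ratings_2025))  # 61.3
```

These match the 31.3% and 61.3% figures reported in the bar graphs later in this section.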
-
-
80.3% independent assessments
In this report, 80.3% of assessments were completed by independent assurers under the Assurance Framework, with 90.0% of assessments for Tier 1 projects meeting this standard. This independence is key to ensuring often complex and challenging digital projects receive the expert, objective advice they need to succeed.
Reforms supporting success – bringing objectivity and rigour to assessing delivery confidence
Delivery confidence assessments are vital for directing effort and support to where it is most needed to ensure the success of all the Australian Government’s digital projects. Therefore, these assessments must be objective and rigorous.
In 2024, the University of Sydney’s John Grill Institute of Project Leadership worked in collaboration with the DTA to prepare best practice guidance on assessing the delivery confidence of digital projects. This guidance identifies the factors that are most significant in the success and failure of digital projects, and sets out how they should be considered when forming an assessment.
This section sets out how digital projects are performing. Digital projects present unique challenges, and the reforms set out in previous sections are playing a key role in ensuring the conditions exist for every project included in this report to succeed.
-
Disclaimer
“Certain numbers in this report have been rounded to one decimal place. Due to rounding, some totals may not correspond with the sum of the separate figures.”
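To illustrate with hypothetical figures (not values from this report), components rounded to one decimal place need not sum to the rounded total:

```python
from decimal import Decimal

# Hypothetical component values, not figures from this report.
parts = [Decimal("1.14"), Decimal("2.24"), Decimal("3.34")]
one_dp = Decimal("0.1")

# Rounding each component first gives 1.1 + 2.2 + 3.3 = 6.6,
# while rounding the exact total 6.72 gives 6.7.
sum_of_rounded = sum(p.quantize(one_dp) for p in parts)
rounded_sum = sum(parts).quantize(one_dp)

print(sum_of_rounded, rounded_sum)  # 6.6 6.7
```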
-
How the Australian Government’s digital projects are performing
-
Improving transparency for Australians on the performance of digital projects
-
Australians now have unprecedented transparency into the performance of the government’s digital projects
Work to improve assurance of digital projects is ensuring reliable assessments of delivery confidence are regularly undertaken. These assessments show most projects are on track to deliver expected outcomes on budget and on schedule.
-
Image description
There are three bar graphs in the image.
- Image 1: Diagram header: 'Tier 1 and 2 projects including an assessment of delivery confidence'. The diagram indicates that in 2024, 52.1% of projects included an assessment of delivery confidence, in comparison to 98.4% in 2025.
- Image 2: Diagram header: 'Independent delivery confidence ratings'. The diagram indicates that in 2025, 80.3% of projects included independent delivery confidence ratings (no comparison data for 2024 is provided).
- Image 3: Diagram header: 'Projects reporting Medium-High or above delivery confidence'. The diagram indicates that in 2024, 31.3% of projects reported Medium-High or above delivery confidence, in comparison with 61.3% in 2025.
-
-
-
Information management for records created using AI technologies
Guidance on identifying and managing records created by, or relating to, AI technologies employed by Australian Government agencies.
These materials are hosted on the National Archives of Australia website.
-
Image description
The diagram indicates that a total of 29 projects entered assurance oversight in February 2024, with a total budget figure of $7.1 billion.
High delivery confidence – 5 projects with a total budget of $0.3 billion.
Medium-High delivery confidence – 17 projects with a total budget of $5.6 billion.
Medium delivery confidence – 5 projects with a total budget of $0.6 billion.
Medium-Low delivery confidence – 2 projects with a total budget of $0.6 billion.
-
-
Understanding overall changes in delivery confidence to target engagement and reforms
Most (75.9%) of the 29 Tier 1 and 2 projects entering oversight since February 2024 report High or Medium-High delivery confidence. These projects commonly report the factors contributing to their initial delivery confidence rating as: establishing effective governance early; having well-prepared documentation and artefacts; and ensuring experienced and capable personnel were ready.
This is an early sign that investment to strengthen digital project design processes is increasing overall delivery confidence. Projects often start with lower levels of delivery confidence, but the recent emphasis on ensuring mature planning is in place before projects start appears to be paying dividends, with more than three-quarters of these new projects entering oversight reporting High or Medium-High confidence. This contrasts with the United Kingdom where ‘it is not unusual for projects to be rated as Red earlier in their lifecycle, when scope, benefits, costs and delivery methods are still being explored’ (Infrastructure and Projects Authority 2024 p.13).
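A quick arithmetic check of the 75.9% figure against the entry-cohort counts shown earlier (5 High and 17 Medium-High of the 29 projects entering oversight):

```python
# Entry-cohort counts: 5 High and 17 Medium-High of 29 projects entering oversight.
high, medium_high, total = 5, 17, 29

# 22 of 29 projects = 75.862...%, which rounds to 75.9%.
share = round(100 * (high + medium_high) / total, 1)
print(share)  # 75.9
```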
Reforms supporting success – partnering with industry to deliver digital projects
Recognising the crucial role of technology vendors in delivering the Australian Government’s ambitions for digital transformation, the Digital and ICT Investment Oversight Framework includes ‘sourcing’ as an area of focus. As part of this, the DTA coordinates marketplaces and agreements designed to enable agencies to easily access technology goods and services to support their digital projects. In 2023–24, the Australian Government sourced more than $6.4 billion of digital products and services from industry via these marketplaces and agreements. By accessing these arrangements through the BuyICT platform, agencies benefited from the Australian Government’s collective buying power and strengthened terms and conditions.
The DTA’s latest ICT labour hire and professional services panel, the Digital Marketplace Panel 2, adopts the APS Career Pathfinder dataset and Skills Framework for the Information Age (SFIA) to classify ICT labour hire opportunities. The classification of roles and greater panel pricing transparency provide clearer signals about in-demand skills, their costs and potential shortages, which will inform delivery capacity and confidence in digital projects. The top in-demand digital and ICT skills sourced by the APS include software engineer, solution architect and business analyst.
-
Summary of requirements in the standard
The statements and criteria of this standard are organised by stage of the AI lifecycle, including those that apply across all lifecycle stages.
Lifecycle stage: Across all lifecycle stages
Statement Number 1. Define an operational model
Recommended
- Identify a suitable operational model to design, develop and deliver the system securely and efficiently.
- Consider the technology impacts of the operating model.
Statement Number 2. Define the reference architecture
Recommended
- Evaluate existing reference architectures.
- Monitor emerging reference architectures to evaluate and update the AI system.
Statement Number 3. Identify and build people capabilities
Recommended
- Identify and assign AI roles to ensure a diverse team of professionals with specialised skills.
- Build and maintain AI capabilities by undertaking regular training and education of staff and stakeholders.
- Mitigate staff overreliance or misuse of AI by conducting regular reviews and audits.
Statement Number 4. Enable AI auditing
Required
- Perform model-specific audits.
Recommended
- Develop an auditable AI system.
Statement Number 5. Provide explainability based on the use case
Required
- Explain the AI technology used, including the limitations and capabilities of the system.
Recommended
- Explain predictions and decisions made by the AI system.
- Explain data usage and sharing.
- Explain the AI model.
Statement Number 6. Manage system bias
Required
- Identify sources of bias.
- Assess identified bias.
- Manage identified bias across the AI system lifecycle.
Statement Number 7. Apply version control practices
Required
- Apply version management practices to the end-to-end development lifecycle.
Recommended
- Use metadata in version control to distinguish between production and non-production data, models and code.
- Use a version control toolset to improve usability for users.
- Record version control information in audit logs.
Statement Number 8. Apply watermarking techniques
Required
- Apply watermarks to media content that is generated to acknowledge provenance and provide transparency.
- Apply watermarks that are WCAG compatible where relevant.
Recommended
- Use watermarking tools based on the use case and content risk.
- Assess watermarking risks and limitations.
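As a rough illustration of Statement 8's intent only (the function name `tag_generated_text` and the model ID are hypothetical, and this sketch is not a substitute for dedicated watermarking tools or content-provenance standards such as C2PA), generated content can carry both a visible notice and a machine-readable provenance record:

```python
import json
from datetime import datetime, timezone

def tag_generated_text(text: str, model_id: str) -> str:
    """Illustrative only: prepend a visible AI-generation notice and append
    a machine-readable provenance record to a piece of generated text."""
    notice = "[AI-generated content]"
    provenance = {
        "generator": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    return f"{notice}\n{text}\n<!-- provenance: {json.dumps(provenance)} -->"

sample = tag_generated_text("Draft summary of project status.", "example-model-1.0")
print(sample.splitlines()[0])  # [AI-generated content]
```

The visible notice addresses transparency for human readers; the embedded metadata supports downstream provenance checks.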
-
Whole of AI lifecycle
Statement Number 1. Define an operational model
Recommended
- Criterion 1: Identify a suitable operational model to design, develop, and deliver the system securely and efficiently.
- Criterion 2: Consider the technology impacts of the operating model.
- Criterion 3: Consider technology hosting strategies.
Statement Number 2. Define the reference architecture
Required
- Criterion 4: Evaluate existing reference architectures.
Recommended
- Criterion 5: Monitor emerging reference architectures to evaluate and update the AI system.
Statement Number 3. Identify and build people capabilities
Required
- Criterion 6: Identify and assign AI roles to ensure a diverse team of business and technology professionals with specialised skills.
- Criterion 7: Build and maintain AI capabilities by undertaking regular training and education of end users, staff, and stakeholders.
Recommended
- Criterion 8: Mitigate staff over-reliance on, under-reliance on, and aversion to AI.
Statement Number 4. Enable AI auditing
Required
- Criterion 9: Provide end-to-end auditability.
- Criterion 10: Perform ongoing data-specific checks across the AI lifecycle.
- Criterion 11: Perform ongoing model-specific checks across the AI lifecycle.
Statement Number 5. Provide explainability based on the use case
Required
- Criterion 12: Explain the AI system and technology used, including the limitations and capabilities of the system.
Recommended
- Criterion 13: Explain outputs made by the AI system to end users.
- Criterion 14: Explain how data is used and shared by the AI system.
Statement Number 6. Manage system bias
Required
- Criterion 15: Identify how bias could affect people, processes, data, and technologies involved in the AI system lifecycle.
- Criterion 16: Assess the impact of bias on your use case.
- Criterion 17: Manage identified bias across the AI system lifecycle.
Statement Number 7. Apply version control practices
Required
- Criterion 18: Apply version management practices to the end-to-end development lifecycle.
Recommended
- Criterion 19: Use metadata in version control to distinguish between production and non-production data, models, and code.
- Criterion 20: Use a version control toolset to improve usability for users.
- Criterion 21: Record version control information in audit logs.
Statement Number 8. Apply watermarking techniques
Required
- Criterion 22: Apply visual watermarks and metadata to generated media content to provide transparency and provenance, including authorship.
- Criterion 23: Apply watermarks that are WCAG compatible where relevant.
- Criterion 24: Apply visual and accessible content to indicate when a user is interacting with an AI system.
Recommended
- Criterion 25: For hidden watermarks, use watermarking tools based on the use case and content risk.
- Criterion 26: Assess watermarking risks and limitations.
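One minimal sketch of Criteria 18–21 (names such as `log_model_release` are hypothetical; real systems would integrate with their existing version control and audit tooling) is to write versioned release records, distinguishing production from non-production, to an audit log:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-audit")

def log_model_release(model_version: str, data_version: str,
                      code_commit: str, environment: str) -> dict:
    """Record version information for an AI release in the audit log,
    covering model, data and code versions (Criteria 18, 19 and 21)."""
    record = {
        "event": "model_release",
        "model_version": model_version,
        "data_version": data_version,
        "code_commit": code_commit,
        "environment": environment,  # e.g. "production" or "staging"
    }
    logger.info(json.dumps(record))
    return record

entry = log_model_release("2.3.0", "dataset-2025-02", "abc1234", "production")
```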
-
-
-
Design
Statement Number 9. Conduct pre-work
Required
- Criterion 27: Define the problem to be solved, its context, intended use, and impacted stakeholders.
- Criterion 28: Assess AI and non-AI alternatives.
- Criterion 29: Assess environmental impact and sustainability.
- Criterion 30: Perform cost analysis across all aspects of the AI system.
- Criterion 31: Analyse how the use of AI will impact the solution and its delivery.
Statement Number 10. Adopt a human-centred approach
Required
- Criterion 32: Identify human values requirements.
- Criterion 33: Establish a mechanism to inform users of AI interactions and output, as part of transparency.
- Criterion 34: Design AI systems to be inclusive and ethical, and to meet accessibility standards using appropriate mechanisms.
- Criterion 35: Design feedback mechanisms.
- Criterion 36: Define human oversight and control mechanisms.
Recommended
- Criterion 37: Involve users in the design process.
Statement Number 11. Design safety systemically
Required
- Criterion 38: Analyse and assess harms.
- Criterion 39: Mitigate harms by embedding mechanisms for prevention, detection, and intervention.
Recommended
- Criterion 40: Design the system to allow calibration at deployment.
Statement Number 12. Define success criteria
Required
- Criterion 41: Identify, assess, and select metrics appropriate to the AI system.
Recommended
- Criterion 42: Reevaluate the selection of appropriate success metrics as the AI system moves through the AI lifecycle.
- Criterion 43: Continuously verify correctness of the metrics.
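As a simple example of a success metric under Criterion 41 (accuracy is only one of many possible metrics, and the appropriate choice depends on the use case and its risks), a sketch in Python:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# 3 of 4 predictions match the labels.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

Per Criteria 42 and 43, whichever metrics are selected should be re-evaluated and verified as the system moves through its lifecycle.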