Lifecycle stage | Statement | Criterion | Required or Recommended |
---|---|---|---|
Whole of AI lifecycle | 01. Define an operational model | Criterion 1: Identify a suitable operational model to design, develop, and deliver the system securely and efficiently. | Recommended |
Whole of AI lifecycle | 01. Define an operational model | Criterion 2: Consider the technology impacts of the operating model. | Recommended |
Whole of AI lifecycle | 01. Define an operational model | Criterion 3: Consider technology hosting strategies. | Recommended |
Whole of AI lifecycle | 02. Define the reference architecture | Criterion 4: Evaluate existing reference architectures. | Required |
Whole of AI lifecycle | 02. Define the reference architecture | Criterion 5: Monitor emerging reference architectures to evaluate and update the AI system. | Recommended |
Whole of AI lifecycle | 03. Identify and build people capabilities | Criterion 6: Identify and assign AI roles to ensure a diverse team of business and technology professionals with specialised skills. | Required |
Whole of AI lifecycle | 03. Identify and build people capabilities | Criterion 7: Build and maintain AI capabilities by undertaking regular training and education of end users, staff, and stakeholders. | Required |
Whole of AI lifecycle | 03. Identify and build people capabilities | Criterion 8: Mitigate staff over-reliance on, under-reliance on, and aversion to AI. | Recommended |
Whole of AI lifecycle | 04. Enable AI auditing | Criterion 9: Provide end-to-end auditability. | Required |
Whole of AI lifecycle | 04. Enable AI auditing | Criterion 10: Perform ongoing data-specific checks across the AI lifecycle. | Required |
Whole of AI lifecycle | 04. Enable AI auditing | Criterion 11: Perform ongoing model-specific checks across the AI lifecycle. | Required |
Whole of AI lifecycle | 05. Provide explainability based on the use case | Criterion 12: Explain the AI system and technology used, including its limitations and capabilities. | Required |
Whole of AI lifecycle | 05. Provide explainability based on the use case | Criterion 13: Explain outputs made by the AI system to end users. | Recommended |
Whole of AI lifecycle | 05. Provide explainability based on the use case | Criterion 14: Explain how data is used and shared by the AI system. | Recommended |
Whole of AI lifecycle | 06. Manage system bias | Criterion 15: Identify how bias could affect people, processes, data, and technologies involved in the AI system lifecycle. | Required |
Whole of AI lifecycle | 06. Manage system bias | Criterion 16: Assess the impact of bias on your use case. | Required |
Whole of AI lifecycle | 06. Manage system bias | Criterion 17: Manage identified bias across the AI system lifecycle. | Required |
Whole of AI lifecycle | 07. Apply version control practices | Criterion 18: Apply version management practices to the end-to-end development lifecycle. | Required |
Whole of AI lifecycle | 07. Apply version control practices | Criterion 19: Use metadata in version control to distinguish between production and non-production data, models, and code. | Recommended |
Whole of AI lifecycle | 07. Apply version control practices | Criterion 20: Use a version control toolset to improve usability for users. | Recommended |
Whole of AI lifecycle | 07. Apply version control practices | Criterion 21: Record version control information in audit logs. | Recommended |
Whole of AI lifecycle | 08. Apply watermarking techniques | Criterion 22: Apply visual watermarks and metadata to generated media content to provide transparency and provenance, including authorship. | Required |
Whole of AI lifecycle | 08. Apply watermarking techniques | Criterion 23: Apply watermarks that are WCAG compatible where relevant. | Required |
Whole of AI lifecycle | 08. Apply watermarking techniques | Criterion 24: Apply visual and accessible content to indicate when a user is interacting with an AI system. | Required |
Whole of AI lifecycle | 08. Apply watermarking techniques | Criterion 25: For hidden watermarks, use watermarking tools based on the use case and content risk. | Recommended |
Whole of AI lifecycle | 08. Apply watermarking techniques | Criterion 26: Assess watermarking risks and limitations. | Recommended |
Design | 09. Conduct pre-work | Criterion 27: Define the problem to be solved, its context, intended use, and impacted stakeholders. | Required |
Design | 09. Conduct pre-work | Criterion 28: Assess AI and non-AI alternatives. | Required |
Design | 09. Conduct pre-work | Criterion 29: Assess environmental impact and sustainability. | Required |
Design | 09. Conduct pre-work | Criterion 30: Perform cost analysis across all aspects of the AI system. | Required |
Design | 09. Conduct pre-work | Criterion 31: Analyse how the use of AI will impact the solution and its delivery. | Required |
Design | 10. Adopt a human-centred approach | Criterion 32: Identify human values requirements. | Required |
Design | 10. Adopt a human-centred approach | Criterion 33: Establish a mechanism to inform users of AI interactions and output, as part of transparency. | Required |
Design | 10. Adopt a human-centred approach | Criterion 34: Design AI systems to be inclusive and ethical, and to meet accessibility standards, using appropriate mechanisms. | Required |
Design | 10. Adopt a human-centred approach | Criterion 35: Design feedback mechanisms. | Required |
Design | 10. Adopt a human-centred approach | Criterion 36: Define human oversight and control mechanisms. | Required |
Design | 10. Adopt a human-centred approach | Criterion 37: Involve users in the design process. | Recommended |
Design | 11. Design safety systemically | Criterion 38: Analyse and assess harms. | Required |
Design | 11. Design safety systemically | Criterion 39: Mitigate harms by embedding mechanisms for prevention, detection, and intervention. | Required |
Design | 11. Design safety systemically | Criterion 40: Design the system to allow calibration at deployment. | Recommended |
Design | 12. Define success criteria | Criterion 41: Identify, assess, and select metrics appropriate to the AI system. | Required |
Design | 12. Define success criteria | Criterion 42: Re-evaluate the selection of appropriate success metrics as the AI system moves through the AI lifecycle. | Recommended |
Design | 12. Define success criteria | Criterion 43: Continuously verify correctness of the metrics. | Recommended |
Data | 13. Establish data supply chain management processes | Criterion 44: Create and collect data for the AI system and identify the purpose for its use. | Required |
Data | 13. Establish data supply chain management processes | Criterion 45: Plan for data archival and destruction. | Required |
Data | 13. Establish data supply chain management processes | Criterion 46: Analyse data for use by mapping the data supply chain and ensuring traceability. | Recommended |
Data | 13. Establish data supply chain management processes | Criterion 47: Implement practices to maintain and reuse data. | Recommended |
Data | 14. Implement data orchestration processes | Criterion 48: Implement processes to enable data access and retrieval, encompassing the sharing, archiving, and deletion of data. | Required |
Data | 14. Implement data orchestration processes | Criterion 49: Establish standard operating procedures for data orchestration. | Recommended |
Data | 14. Implement data orchestration processes | Criterion 50: Configure integration processes to integrate data in increments. | Recommended |
Data | 14. Implement data orchestration processes | Criterion 51: Implement automation processes to orchestrate the reliable flow of data between systems and platforms. | Recommended |
Data | 14. Implement data orchestration processes | Criterion 52: Perform oversight and regular testing of task dependencies. | Recommended |
Data | 14. Implement data orchestration processes | Criterion 53: Establish and maintain data exchange processes. | Recommended |
Data | 15. Implement data transformation and feature engineering practices | Criterion 54: Establish data cleaning procedures to manage any data issues. | Recommended |
Data | 15. Implement data transformation and feature engineering practices | Criterion 55: Define data transformation processes to convert and optimise data for the AI system. | Recommended |
Data | 15. Implement data transformation and feature engineering practices | Criterion 56: Map the points where transformation occurs between datasets and across the AI system. | Recommended |
Data | 15. Implement data transformation and feature engineering practices | Criterion 57: Identify fit-for-purpose feature engineering techniques. | Recommended |
Data | 15. Implement data transformation and feature engineering practices | Criterion 58: Apply consistent data transformation and feature engineering methods to support data reuse and extensibility. | Recommended |
Data | 16. Ensure data quality is acceptable | Criterion 59: Define quality assessment criteria for the data used in the AI system. | Required |
Data | 16. Ensure data quality is acceptable | Criterion 60: Implement data profiling activities and remediate any data quality issues. | Recommended |
Data | 16. Ensure data quality is acceptable | Criterion 61: Define processes for labelling data and managing the quality of data labels. | Recommended |
Data | 17. Validate and select data | Criterion 62: Perform data validation activities to ensure data meets the requirements for the system’s purpose. | Required |
Data | 17. Validate and select data | Criterion 63: Select data for use that is aligned with the purpose of the AI system. | Required |
Data | 18. Enable data fusion, integration and sharing | Criterion 64: Analyse data fusion and integration requirements. | Recommended |
Data | 18. Enable data fusion, integration and sharing | Criterion 65: Establish an approach to data fusion and integration. | Recommended |
Data | 18. Enable data fusion, integration and sharing | Criterion 66: Identify data sharing arrangements and processes to maintain consistency. | Recommended |
Data | 19. Establish the model and context dataset | Criterion 67: Measure how representative the model dataset is. | Required |
Data | 19. Establish the model and context dataset | Criterion 68: Separate the model training dataset from the validation and testing datasets. | Required |
Data | 19. Establish the model and context dataset | Criterion 69: Manage bias in the data. | Required |
Data | 19. Establish the model and context dataset | Criterion 70: For generative AI, build reference or contextual datasets to improve the quality of AI outputs. | Recommended |
Train | 20. Plan the model architecture | Criterion 71: Establish success criteria that cover any AI training and operational limitations for infrastructure and costs. | Required |
Train | 20. Plan the model architecture | Criterion 72: Define a model architecture for the use case suitable to the data and AI system operation. | Required |
Train | 20. Plan the model architecture | Criterion 73: Select algorithms aligned with the purpose of the AI system and the available data. | Required |
Train | 20. Plan the model architecture | Criterion 74: Set training boundaries in relation to any infrastructure, performance, and cost limitations. | Required |
Train | 20. Plan the model architecture | Criterion 75: Start small and scale gradually. | Recommended |
Train | 21. Establish training environment | Criterion 76: Establish compute resources and infrastructure for the training environment. | Required |
Train | 21. Establish training environment | Criterion 77: Secure the infrastructure. | Required |
Train | 21. Establish training environment | Criterion 78: Reuse available approved AI modelling frameworks, libraries, and tools. | Recommended |
Train | 22. Implement model creation, tuning, and grounding | Criterion 79: Set assessment criteria for the AI models, with respect to pre-defined metrics for the AI system. | Required |
Train | 22. Implement model creation, tuning, and grounding | Criterion 80: Identify and address situations when AI outputs should not be provided. | Required |
Train | 22. Implement model creation, tuning, and grounding | Criterion 81: Apply considerations for reusing existing agency models, off-the-shelf, and pre-trained models. | Required |
Train | 22. Implement model creation, tuning, and grounding | Criterion 82: Create or fine-tune models optimised for the target domain environment. | Required |
Train | 22. Implement model creation, tuning, and grounding | Criterion 83: Create and train using multiple model architectures and learning strategies. | Recommended |
Train | 23. Validate, assess, and update model | Criterion 84: Set techniques to validate trained AI models. | Required |
Train | 23. Validate, assess, and update model | Criterion 85: Evaluate the model against training boundaries. | Required |
Train | 23. Validate, assess, and update model | Criterion 86: Evaluate the model for bias, and implement and test bias mitigations. | Required |
Train | 23. Validate, assess, and update model | Criterion 87: Identify relevant model refinement methods. | Recommended |
Train | 24. Select trained models | Criterion 88: Assess a pool of trained models against acceptance metrics to select a model for the AI system. | Recommended |
Train | 25. Implement continuous improvement frameworks | Criterion 89: Establish interface tools and feedback channels for machines and humans. | Required |
Train | 25. Implement continuous improvement frameworks | Criterion 90: Perform model version control. | Required |
Evaluate | 26. Adapt strategies and practices for AI systems | Criterion 91: Mitigate bias in the testing process. | Required |
Evaluate | 26. Adapt strategies and practices for AI systems | Criterion 92: Define test criteria approaches. | Required |
Evaluate | 26. Adapt strategies and practices for AI systems | Criterion 93: Define how test coverage will be measured. | Recommended |
Evaluate | 26. Adapt strategies and practices for AI systems | Criterion 94: Define a strategy to ensure test adequacy. | Recommended |
Evaluate | 27. Test for specified behaviour | Criterion 95: Undertake human verification of test design and implementation for correctness, consistency, and completeness. | Required |
Evaluate | 27. Test for specified behaviour | Criterion 96: Conduct functional performance testing to verify the correctness of the AI System Under Test (SUT) as per the pre-defined metrics. | Required |
Evaluate | 27. Test for specified behaviour | Criterion 97: Perform controllability testing to verify human oversight and control, and system control requirements. | Required |
Evaluate | 27. Test for specified behaviour | Criterion 98: Perform explainability and transparency testing as per the requirements. | Required |
Evaluate | 27. Test for specified behaviour | Criterion 99: Perform calibration testing as per the requirements. | Required |
Evaluate | 27. Test for specified behaviour | Criterion 100: Perform logging tests as per the requirements. | Required |
Evaluate | 28. Test for safety, robustness, and reliability | Criterion 101: Test the computational performance of the system. | Required |
Evaluate | 28. Test for safety, robustness, and reliability | Criterion 102: Test safety measures through negative testing methods, failure testing, and fault injection. | Required |
Evaluate | 28. Test for safety, robustness, and reliability | Criterion 103: Test the reliability of AI output through stress testing over an extended period, simulating edge cases, and operating under extreme conditions. | Required |
Evaluate | 28. Test for safety, robustness, and reliability | Criterion 104: Undertake adversarial testing (red team testing), attempting to break security and privacy measures to identify weaknesses. | Recommended |
Evaluate | 29. Test for conformance and compliance | Criterion 105: Verify compliance with relevant policies, frameworks, and legislation. | Required |
Evaluate | 29. Test for conformance and compliance | Criterion 106: Verify conformance against organisation and industry-specific coding standards. | Required |
Evaluate | 29. Test for conformance and compliance | Criterion 107: Perform vulnerability testing to identify any well-known vulnerabilities. | Required |
Evaluate | 30. Test for intended and unintended consequences | Criterion 108: Perform user acceptance testing (UAT) and scenario testing, validating the system with a diversity of end-users in their operating contexts and real-world scenarios. | Required |
Evaluate | 30. Test for intended and unintended consequences | Criterion 109: Perform robust regression testing to mitigate the heightened risk of escaped defects resulting from changes, such as a step change in parameters. | Recommended |
Integrate | 31. Undertake integration planning | Criterion 110: Ensure the AI system meets architecture and operational requirements, consistent with the Australian Government Security Authority to Operate (SATO). | Recommended |
Integrate | 31. Undertake integration planning | Criterion 111: Identify suitable tests for integration with the operational environment, systems, and data. | Recommended |
Integrate | 32. Manage integration as a continuous practice | Criterion 112: Apply secure and auditable continuous integration practices for AI systems. | Recommended |
Deploy | 33. Create business continuity plans | Criterion 113: Develop plans to ensure critical systems remain operational during disruptions. | Required |
Deploy | 34. Configure a staging environment | Criterion 114: Ensure the staging environment mirrors the production environment in configurations, libraries, and dependencies for consistency and predictability. | Recommended |
Deploy | 34. Configure a staging environment | Criterion 115: Measure the performance of the AI system in the staging environment against predefined metrics. | Recommended |
Deploy | 34. Configure a staging environment | Criterion 116: Ensure deployment strategies include monitoring for AI specific metrics, such as inference latency and AI output accuracy. | Recommended |
Deploy | 35. Deploy to a production environment | Criterion 117: Apply strategies for phased roll-out. | Required |
Deploy | 35. Deploy to a production environment | Criterion 118: Apply readiness verification, assurance checks, and change management practices for the AI system. | Required |
Deploy | 35. Deploy to a production environment | Criterion 119: Apply strategies for limiting service interruptions. | Recommended |
Deploy | 36. Implement rollout and safe rollback mechanisms | Criterion 120: Define a comprehensive rollout and rollback strategy. | Recommended |
Deploy | 36. Implement rollout and safe rollback mechanisms | Criterion 121: Implement load balancing and traffic shifting methods for system rollout. | Recommended |
Deploy | 36. Implement rollout and safe rollback mechanisms | Criterion 122: Conduct regular health checks, readiness, and startup probes to verify stability and performance on the deployment environment. | Recommended |
Deploy | 36. Implement rollout and safe rollback mechanisms | Criterion 123: Implement rollback mechanisms to revert to the last stable version in case of failure. | Recommended |
Monitor | 37. Establish monitoring framework | Criterion 124: Define reporting requirements. | Recommended |
Monitor | 37. Establish monitoring framework | Criterion 125: Define alerting requirements. | Recommended |
Monitor | 37. Establish monitoring framework | Criterion 126: Implement monitoring tools. | Recommended |
Monitor | 37. Establish monitoring framework | Criterion 127: Implement a feedback loop to ensure that insights from monitoring are fed back into the development and improvement of the AI system. | Recommended |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 128: Test periodically after deployment and have a clear framework to manage any issues. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 129: Monitor the system as agreed and specified in its operating procedures. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 130: Monitor performance and AI drift as per pre-defined metrics. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 131: Monitor health of the system and infrastructure. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 132: Monitor safety. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 133: Monitor reliability metrics and mechanisms. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 134: Monitor human-machine collaboration. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 135: Monitor for unintended consequences. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 136: Monitor transparency and explainability. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 137: Monitor costs. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 138: Monitor security. | Required |
Monitor | 38. Undertake ongoing testing and monitoring | Criterion 139: Monitor compliance of the AI system. | Required |
Monitor | 39. Establish incident resolution processes | Criterion 140: Define incident handling processes. | Required |
Monitor | 39. Establish incident resolution processes | Criterion 141: Implement corrective and preventive actions for incidents. | Required |
Decommission | 40. Create a decommissioning plan | Criterion 142: Define the scope of decommissioning activities. | Required |
Decommission | 40. Create a decommissioning plan | Criterion 143: Conduct an impact analysis of decommissioning the target AI system. | Required |
Decommission | 40. Create a decommissioning plan | Criterion 144: Proactively communicate system retirement. | Required |
Decommission | 41. Shut down the AI system | Criterion 145: Retain AI system compliance records. | Required |
Decommission | 41. Shut down the AI system | Criterion 146: Disable computing resources or components specifically dedicated to the AI system. | Required |
Decommission | 41. Shut down the AI system | Criterion 147: Securely decommission or repurpose all computing resources specifically dedicated to the AI system, including individual and shared components. | Required |
Decommission | 42. Finalise documentation and reporting | Criterion 148: Finalise decommissioning information and update organisational documentation. | Required |
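Criteria 67 and 68 require the model training dataset to be kept separate from the validation and testing datasets. A minimal sketch of one way to enforce that separation, assuming a hash-based deterministic split; the split ratios and the record-ID scheme are illustrative assumptions, not prescribed by the standard:

```python
import hashlib

def assign_split(record_id: str, val_frac: float = 0.1, test_frac: float = 0.1) -> str:
    """Deterministically assign a record to train/validation/test.

    Hashing the record ID (rather than random sampling) keeps the
    assignment stable across pipeline re-runs, so a record can never
    leak from the test set into training when data is reprocessed.
    """
    digest = hashlib.sha256(record_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    if bucket < test_frac:
        return "test"
    if bucket < test_frac + val_frac:
        return "validation"
    return "train"

# Illustrative dataset of synthetic record IDs.
records = [f"record-{i}" for i in range(10_000)]
splits = {name: [r for r in records if assign_split(r) == name]
          for name in ("train", "validation", "test")}
```

Because assignment depends only on the record ID, newly arriving records join a split without disturbing existing assignments, which supports the auditability expected under Criterion 9.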
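Criteria 128 and 130 call for ongoing post-deployment testing and monitoring of AI drift against pre-defined metrics. One common drift check is the Population Stability Index (PSI), sketched below; the equal-width binning scheme and any alert threshold are assumptions for illustration, not part of the standard:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample.

    Larger values indicate a bigger shift in the distribution of a
    feature or model score between training time and production.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(values: list[float], a: float, b: float, last: bool) -> float:
        n = sum(1 for v in values if a <= v < b or (last and v == b))
        return max(n / len(values), 1e-6)  # floor avoids log(0)

    total = 0.0
    for i in range(bins):
        last = i == bins - 1
        e = frac(expected, edges[i], edges[i + 1], last)
        a = frac(actual, edges[i], edges[i + 1], last)
        total += (a - e) * math.log(a / e)
    return total

# Illustrative comparison: an unchanged sample versus a shifted one.
baseline = [i / 100 for i in range(100)]
drifted = [v + 0.3 for v in baseline]
baseline_psi = psi(baseline, baseline)  # no drift
drift_psi = psi(baseline, drifted)      # pronounced shift
```

In a monitoring pipeline, a check like this would run on a schedule and raise an alert when the index crosses a locally agreed threshold, feeding the incident handling processes under Criterion 140.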