Appendix 4: Mapping of dimensions to the Technical standard for government’s use of artificial intelligence

To help implementation teams answer alignment questions, and to provide greater technical depth for each implementation dimension, the table below maps each dimension to the Technical standard for government's use of artificial intelligence:

Mapping of implementation dimensions to the AI Technical standard
| Dimension | Relevant AI Technical standard statements | Practical alignment notes |
| --- | --- | --- |
| Business & strategic alignment | Statement 9: Conduct pre-work (design); Statement 12: Define success criteria (design); Whole-of-lifecycle Statement 1: Operational model | Problem framing, define measurable outcomes, operational model governance. |
| Architecture & solution design | Statement 2: Reference architecture (whole-of-lifecycle); Statement 10: Human-centred design; Statement 11: Design safety; Statements 7 & 8: Version control & watermarking | Align to reference architecture, embed design safety, maintain traceability. |
| Data & integration | Statements 13–19: Data supply chain, orchestration, quality, fusion/integration, establish context datasets; Whole-of-lifecycle Statement 6: Manage bias | Ensure data quality, governance, sovereignty, integration and bias controls. |
| Technology & tools | Statements 20–25: Training & modelling; Whole-of-lifecycle Statements 2, 4, 7, 8 (Reference architecture, auditing, version control, watermarking) | Ensure tooling supports traceability, auditing, secure model training. |
| People & skills | Statement 3: Build people capability (whole-of-lifecycle); Statement 10: Human-centred design; Statement 1: Operational model roles; Statement 4: Auditing capability | Develop AI skills, define roles, embed co-design, build auditing capacity. |
| Governance & risk | Statement 1: Operational model; Statement 4: Auditing; Statement 5: Explainability; Statement 6: Manage bias; Statement 11: Design safety; Statement 9: Pre-work | Define governance structures, enable auditing, embed explainability and bias management. |
| Experimentation & validation | Statement 11: Design safety; Statement 12: Success criteria; Statements 22–25: Evaluation & continuous improvement; Statements 4 & 5: Auditing & explainability | Embed success thresholds, user validation, auditing, continuous learning. |
| Delivery & operations | Operate phase: Integrate, deploy, monitor; Whole-of-lifecycle Statements 4, 5, 7 (Auditing, Explainability, Version control) | Integrate securely, deploy responsibly, monitor performance and maintain auditability. |
| Scalability & transition to production | Operate phase: Deploy & monitor; Whole-of-lifecycle Statements 4, 7, 8 (Auditing, Versioning, Watermarking) | Plan scalability, embed operational monitoring, maintain version control. |
| Sustainment & exit strategy | Retire phase: Decommission; Whole-of-lifecycle Statement 4: Audit; Statement 7: Version control | Plan for decommissioning, preserve audit trails, archive responsibly. |

Return to: Guidance for AI proof of concept to scale
