-
Notes:
Requirements for handling personal and sensitive data within AI systems are set out in the Privacy Act, the Australian Privacy Principles, the Privacy and Other Legislation Amendment Act 2024, and the Handling personal information guidance.
Data archival and destruction must comply with the Information management legislation.
The Framework for the Governance of Indigenous Data provides guidelines on Indigenous data sovereignty.
The Office of the Australian Information Commissioner (OAIC) provides Guidelines on data matching in Australian Government administration, which agencies must consider prior to data integration and fusion activities.
The Information management for records created using Artificial Intelligence (AI) technologies guidance (naa.gov.au) provides advice on managing data for AI.
The Data Availability and Transparency Act 2022 (DATA Scheme) requires agencies to identify data as open, shared, or closed.
The Guidelines for data transfers (cyber.gov.au) provide guidance on the processes and procedures for data transfers and transmissions.
The APS Data Ethics Use Cases provide guidance for agencies to manage and mitigate data bias.
The OECD report Responding to societal challenges with data provides guidance on data access, sharing, and reuse.
-
Train: statements 20 - 25
-
The train stage covers the creation and selection of models and algorithms. The key activities in this stage include modelling, pre- and post-processing, model refinements, and fine-tuning. It also considers the use of pre-trained models and associated fine-tuning for the operational context.
-
Exemptions
The DTA acknowledges that some agencies may be unable to meet one or more of the criteria set out by the Digital Service Standard due to a range of circumstances. These circumstances may include but are not limited to:
- legacy technology barriers that the agency cannot reasonably overcome
- substantial financial burden caused by changing a service to meet criteria.
Exemptions may be granted for one or more of the criteria set out by the Digital Service Standard. This will be assessed on a case-by-case basis. Exemptions must be applied for through the DTA.
Further information can be found in the Digital Experience Policy Exemption Guide.
Note: Even if a service or website is not covered by the Digital Service Standard, or an exemption is received, obligations may still apply under relevant Australian legislation, for example accessibility requirements under the Disability Discrimination Act 1992.
-
AI training involves processing large amounts of data to enable AI models to recognise patterns, make predictions, draw inferences, and generate content. This process creates a mathematical model with parameters that can range from a few to trillions. Training an AI model requires iterative adjustment of these parameters, which increases demands on processing power and storage.
Training a model can be compute-heavy, relying on infrastructure that can be expensive. The model architecture, including the choice of AI algorithm and learning strategy, together with the size of the model dataset, will influence the infrastructure requirements for the training environment.
The AI model encapsulates a complex mathematical relationship between input and output data, derived from patterns in a modelling dataset. AI models can be chained together to provide more complex capabilities.
Pre-processing and post-processing augment the capabilities of the AI model. Application, platform, and infrastructure components also contribute to the overall behaviour and performance of the whole AI system.
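To make this composition concrete, the sketch below chains a pre-processing step with a model and applies a simple post-processing rule. It is illustrative only: scikit-learn, the iris dataset, and the decision rule are assumptions, not part of this standard.

```python
# Illustrative sketch: pre-processing chained with an AI model, plus a
# post-processing rule. Library, dataset, and rule are assumptions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),                   # pre-processing
    ("model", LogisticRegression(max_iter=1000)),  # the AI model
])
pipeline.fit(X, y)

# Post-processing: map the raw model output to a business-level decision.
label = pipeline.predict(X[:1])[0]
decision = "refer for review" if label == 2 else "proceed"  # hypothetical rule
print(label, decision)
```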
Due to the number of mathematical computations involved and the time taken to execute them, training can be a highly intensive stage of the AI lifecycle. How intensive depends on the available infrastructure, the algorithms used to train the AI model, and the size of the training datasets.
Key considerations during this stage include:
- the model architecture, including the AI model and how components within the model interact, as well as the use of off-the-shelf or pre-trained models
- selection and development of the algorithms and learning strategies used to train the AI model
- an iterative process of implementing model architecture, setting hyperparameters, and training on model datasets
- model validation tests, supplemented by human evaluation, which evaluate whether the model is fit-for-purpose and reliable
- trained model selection assessments, which streamline development and enhance capabilities by comparing various models for the AI system
- continuous improvement frameworks, which set processes for measuring model outputs and gathering business and user feedback to manage model performance.
If, after multiple rounds of refinement, the model does not meet requirements or success criteria, a new model may need to be created, business requirements may need to be updated, or the model may be retired.
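The sketch below illustrates that iterative train, validate and refine loop. The library (scikit-learn), the dataset, the candidate hyperparameters, and the 0.90 success criterion are illustrative assumptions only.

```python
# Minimal sketch of the iterative train/validate/refine loop described above.
# Dataset, hyperparameter grid, and success criterion are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

SUCCESS_CRITERION = 0.90            # hypothetical fit-for-purpose threshold
candidates = [                      # iterative refinement over hyperparameters
    {"n_estimators": 10, "max_depth": 2},
    {"n_estimators": 100, "max_depth": 8},
]

selected = None
for params in candidates:
    model = RandomForestClassifier(random_state=0, **params)
    model.fit(X_train, y_train)                          # train
    score = accuracy_score(y_val, model.predict(X_val))  # validate
    if score >= SUCCESS_CRITERION:
        selected = model                                 # fit-for-purpose
        break

if selected is None:
    # Refinement failed: create a new model, update business requirements,
    # or retire the candidate, as described above.
    print("No candidate met the success criterion; escalate per lifecycle guidance.")
```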
See the Design lifecycle stage for details on measuring model outputs, as well as business and user feedback, to manage AI model performance.
See the Apply version control practices statement in the Whole of AI lifecycle section for detail on tracking changes to training models, trained models, algorithms, learning types, and hyperparameters.
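As one illustration of those version control practices, the sketch below writes an experiment record covering the algorithm, learning type, hyperparameters, dataset version, and model version. All field names and the dataset path are hypothetical.

```python
# Hypothetical experiment record supporting traceability of training runs.
import datetime
import hashlib
import json

def dataset_fingerprint(path: str) -> str:
    """Hash the training data so the exact dataset version is traceable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

record = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "algorithm": "RandomForestClassifier",
    "learning_type": "supervised",
    "hyperparameters": {"n_estimators": 100, "max_depth": 8},
    "dataset_sha256": dataset_fingerprint("train.csv"),  # illustrative path
    "model_version": "1.3.0",
}

# Committing this record alongside the code keeps changes reviewable.
with open("experiment_record.json", "w") as f:
    json.dump(record, f, indent=2)
```

Storing such records under the same version control as the code means any trained model can be traced back to the exact data, algorithm, and hyperparameters that produced it.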
-
Statement 20: Plan the model architecture
-
Whole of AI lifecycle
-
Agencies must:
Criterion 6: Identify and assign AI roles to ensure a diverse team of business and technology professionals with specialised skills.
Specialist roles may include, noting that an individual may perform one or more of these roles:
- AI accountable official: A senior executive accountable for their agency’s implementation of the Policy for the responsible use of AI in government
- Data scientists and analysts: Professionals who collect, process, and analyse datasets to inform AI models. They will have expertise in statistical analysis which supports the development of reliable AI systems
- AI integration engineers: Professionals responsible for planning, designing and implementing all components requiring integration in an AI system. The role includes reviewing client needs, developing and testing specifications and documenting outputs
- AI and machine learning engineers: Specialists who design, build, and maintain AI models and algorithms. They work closely with data scientists to implement scalable AI systems
- AI test engineers: Specialists who verify and validate AI systems against business and technical requirements
- Ethics and compliance officers: Specialists who ensure that AI systems adhere to legal standards and ethical guidelines, mitigating risks associated with AI systems
- Domain experts: Individuals with specialised knowledge in specific fields, such as healthcare or finance, who provide context and insights to ensure that AI systems are relevant and effective within their respective domain.
Criterion 7: Build and maintain AI capabilities by undertaking regular training and education of end users, staff, and stakeholders.
This may involve:
- providing regular training programs to keep staff updated on the latest tools, methodologies, ethical guidelines and regulatory requirements
- tailoring training to the knowledge requirements of each role, and providing staff involved in procurement, design, development, testing, and deployment of AI systems with specialised training. For example, individuals responsible for managing and operating AI decision-making systems should undergo specific AI ethics training
- tailoring training for people with disability
- using interactive workshops, simulations, case study walk-throughs and computing sandpit environments to provide more immersive and real-world-like experiences, especially for more complex aspects of AI.
Agencies should:
Criterion 8: Mitigate staff over-reliance on, under-reliance on, and aversion to AI.
This may involve:
- performing periodic technology-specific training, performance assessments, peer reviews, or random audits
- implementing a regular feedback loop for incorrect AI outcomes, as sketched below.
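One possible shape for that feedback loop is sketched below: reviewers record whether an AI outcome was correct so error rates can be monitored over time. The storage format and all field names are assumptions for illustration.

```python
# Hypothetical feedback log for incorrect AI outcomes (Criterion 8).
import csv
import datetime

def record_feedback(case_id: str, ai_output: str, reviewer: str,
                    correct: bool, notes: str = "") -> None:
    """Append one reviewer judgement so error rates can be tracked over time."""
    with open("ai_feedback.csv", "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
            case_id, ai_output, reviewer, correct, notes,
        ])

record_feedback("CASE-042", "approve", reviewer="a.officer",
                correct=False, notes="applicant ineligible under rule 7(b)")
```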
-
Statement 4: Enable AI auditing
-
Agencies must:
Criterion 9: Provide end-to-end auditability.
End-to-end AI auditability refers to the ability to trace and inspect the decisions and processes involved in the AI system lifecycle. This enables internal and external scrutiny. Publishing audit results enables public accountability, transparency, and trust.
This may include:
- establishing documentation across the AI system lifecycle as agreed with the accountable official. This should demonstrate conformance with the AI technical standard, and compliance with relevant legislation and regulations.
- establishing traceability of decisions and changes from requirements through to operational impacts
- ensuring accessibility, availability, and explainability of technical and non-technical information to assist audits
- ensuring audit logging of AI tools and systems is configured appropriately (see the sketch after this list). This may include:
  - enabling or disabling the capture of system inputs and outputs
  - detecting and recording modifications to the system’s operation or performance
  - recording who made the modification, under what authority, and the rationale for the modification
  - recording the system version and any other critical system information
- reviewing audit logs
- ensuring independence and avoiding conflict of interest when undertaking AI audits.
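By way of illustration, the sketch below captures system inputs and outputs (with capture that can be enabled or disabled) and records modifications with who, under what authority, the rationale, and the system version, as described above. The logger configuration and field names are assumptions.

```python
# Hypothetical audit logging aligned with Criterion 9.
import json
import logging

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")
audit_log = logging.getLogger("ai_audit")

CAPTURE_IO = True  # capture of system inputs and outputs can be toggled

def log_inference(system_version: str, inputs: dict, outputs: dict) -> None:
    """Record one system input/output pair, if capture is enabled."""
    if CAPTURE_IO:
        audit_log.info(json.dumps({"event": "inference",
                                   "system_version": system_version,
                                   "inputs": inputs, "outputs": outputs}))

def log_modification(system_version: str, who: str, authority: str,
                     rationale: str, change: str) -> None:
    """Record who changed the system, under what authority, and why."""
    audit_log.info(json.dumps({"event": "modification",
                               "system_version": system_version,
                               "who": who, "authority": authority,
                               "rationale": rationale, "change": change}))

log_inference("1.3.0", inputs={"text": "example"}, outputs={"label": "approve"})
log_modification("1.3.0", who="j.citizen", authority="CAB-2025-014",
                 rationale="retrained on Q3 data", change="model weights updated")
```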
Criterion 10: Perform ongoing data-specific checks across the AI lifecycle.
This should address:
- data quality for AI training, capabilities, and limitations
- how data was evaluated for bias
- controls to detect and manage data poisoning
- legislative compliance.
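A minimal sketch of such data-specific checks follows. pandas, the column names, and the thresholds are illustrative assumptions.

```python
# Hypothetical data-specific checks: quality, duplicates, and a simple
# representation screen for bias. Columns and thresholds are illustrative.
import pandas as pd

df = pd.DataFrame({
    "age": [34, 51, None, 29],
    "region": ["NSW", "VIC", "NSW", "NSW"],
    "outcome": [1, 0, 1, 0],
})

# Data quality: completeness and duplicate records.
missing_rate = df.isna().mean()
assert (missing_rate < 0.3).all(), f"Excessive missing data:\n{missing_rate}"
assert not df.duplicated().any(), "Duplicate records detected"

# Bias screen: flag groups that are badly under-represented in the data.
group_share = df["region"].value_counts(normalize=True)
under_represented = group_share[group_share < 0.1]
if not under_represented.empty:
    print("Review representation for:", list(under_represented.index))
```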
Criterion 11: Perform ongoing model-specific checks across the AI lifecycle.
This should address:
- tracking and maintaining experiments with new models and algorithms to ensure reproducibility (achieving similar model performance with the same dataset)
- output flaws such as factually incorrect, nonsensical, or misleading information, which may be referred to as AI hallucinations
- bias and potential harms, such as ensuring fair treatment of all demographic groups
- model explainability
- controls to detect and manage model poisoning
- legislative compliance.
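As an illustration of the reproducibility check, the sketch below retrains the same model on the same dataset with a fixed seed and confirms the scores match. The model, dataset, and tolerance are assumptions.

```python
# Hypothetical reproducibility check: retraining on the same dataset with a
# fixed seed should yield the same performance.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def train_and_score() -> float:
    model = LogisticRegression(max_iter=1000, random_state=0)
    return cross_val_score(model, X, y, cv=5).mean()

first, second = train_and_score(), train_and_score()
assert abs(first - second) < 1e-6, "Run-to-run drift: investigate nondeterminism"
print(f"Reproducible mean accuracy: {first:.3f}")
```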