Statement 9: Conduct pre-work
Agencies must:
Criterion 27: Define the problem to be solved, its context, intended use, and impacted stakeholders.
This includes:
- analysing the problem through problem-solving frameworks such as root cause analysis, design thinking, and DMAIC (define, measure, analyse, improve, control)
- defining user needs, system goals, and the scope of AI in the system
- identifying and documenting stakeholders, including:
  - internal or external end-users, such as APS staff or members of the public
  - Indigenous Australians (refer to the Framework for Governance of Indigenous Data)
  - people with lived experience, including those defined by religion, ethnicity, or migration status
  - data experts, such as owners of the data being used to train and validate the AI system
  - subject matter experts, such as internal staff
  - the development team, including senior responsible officers (SROs), architects, and engineers.
- understanding the context of the problem, such as interacting processes, data, systems, and the internal and external operating environment
- phrasing the problem in a way that is technology agnostic.
Criterion 28: Assess AI and non-AI alternatives.
This includes:
- starting with the simplest design, experimenting, and iterating
- validating and justifying the need for AI through an objective, evidence-based assessment
- differentiating parts that could be solved by traditional software from parts that could benefit from AI
- determining whether AI offers greater benefit than non-AI alternatives by comparing key performance indicators (KPIs); see the sketch after this list
- considering the interaction of any AI and non-AI components
- considering existing agency solutions and commercial or open-source off-the-shelf products
- examining capabilities, performance, cost, and limitations of each option
- conducting proof of concept and pilots to assess and validate the feasibility of each option
- considering foundation and frontier models for transformative use cases. Foundation models are versatile, trained on large datasets, and can be fine-tuned for specific contexts; frontier models are at the forefront of AI research and development, trained on extensive datasets, and may demonstrate creativity or reasoning.
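A minimal sketch, assuming hypothetical options, KPI figures, and weights (none of which come from this standard), of how an agency might compare an AI option against a non-AI baseline once pilot measurements are available:

```python
# Hypothetical comparison of an AI option against a non-AI baseline.
# All figures and weights are illustrative; real values would come from
# pilots or proofs of concept conducted under Criterion 28.

OPTIONS = {
    "rules_based_triage": {"accuracy": 0.78, "cost_per_case_aud": 1.20, "median_latency_s": 0.4},
    "ml_triage_model":    {"accuracy": 0.91, "cost_per_case_aud": 2.10, "median_latency_s": 1.8},
}

# Weights reflect how much each KPI matters for this use case (sum to 1).
WEIGHTS = {"accuracy": 0.6, "cost_per_case_aud": 0.25, "median_latency_s": 0.15}

# KPIs where a lower value is better are inverted before scoring.
LOWER_IS_BETTER = {"cost_per_case_aud", "median_latency_s"}


def normalise(kpi: str, value: float) -> float:
    """Scale a KPI to [0, 1] across the candidate options."""
    values = [option[kpi] for option in OPTIONS.values()]
    lo, hi = min(values), max(values)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if kpi in LOWER_IS_BETTER else score


def weighted_score(option: dict) -> float:
    """Combine normalised KPIs into a single comparison score."""
    return sum(WEIGHTS[kpi] * normalise(kpi, value) for kpi, value in option.items())


if __name__ == "__main__":
    for name, kpis in OPTIONS.items():
        print(f"{name}: weighted score = {weighted_score(kpis):.2f}")
```

A weighted score of this kind is only one input to the decision; qualitative factors such as integration effort, availability of off-the-shelf products, and maintainability still need to be weighed alongside it.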
Criterion 29: Assess environmental impact and sustainability.
Developing and using AI systems may involve trade-offs in electricity usage, water consumption, and carbon emissions.
Criterion 30: Perform cost analysis across all aspects of the AI system.
This includes:
- infrastructure, software, and tooling costs for:
  - acquiring and processing data for training, validation, and testing
  - tuning the AI system to your particular use case and environment
  - internally or externally hosting the AI system
  - operating, monitoring, and maintaining the AI system.
- the cost of human resources with the necessary AI skills and expertise; an indicative costing sketch follows this list.
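A minimal sketch, assuming entirely hypothetical unit costs, electricity usage, and a placeholder carbon-intensity figure, of how the cost components above might be tallied for a single year, together with the electricity and emissions trade-offs flagged in Criterion 29:

```python
# Indicative annual cost and footprint tally for a proposed AI system.
# Every figure below is a placeholder; agencies would substitute quotes,
# metered usage, and their jurisdiction's published carbon-intensity data.

ANNUAL_COSTS_AUD = {
    "data_acquisition_and_processing": 40_000,   # training, validation, and test data
    "model_tuning":                    25_000,   # adapting to the use case and environment
    "hosting":                         60_000,   # internal or external hosting
    "operation_and_monitoring":        35_000,   # running, monitoring, maintenance
    "staff_with_ai_expertise":        180_000,   # human resources with AI skills
}

# Rough electricity and emissions estimate (Criterion 29).
ANNUAL_KWH = 20_000                   # metered or estimated electricity use
GRID_CARBON_KG_PER_KWH = 0.7          # placeholder grid carbon intensity
ELECTRICITY_PRICE_AUD_PER_KWH = 0.30  # placeholder electricity price

electricity_cost = ANNUAL_KWH * ELECTRICITY_PRICE_AUD_PER_KWH
emissions_tonnes = ANNUAL_KWH * GRID_CARBON_KG_PER_KWH / 1000

total_cost = sum(ANNUAL_COSTS_AUD.values()) + electricity_cost

print(f"Total annual cost:   A${total_cost:,.0f}")
print(f"Estimated emissions: {emissions_tonnes:.1f} t CO2-e per year")
```

Repeating a tally like this for each candidate option helps show whether the expected benefit identified under Criterion 28 justifies the ongoing cost and environmental footprint.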
Criterion 31: Analyse how the use of AI will impact the solution and its delivery.
This includes:
- identifying the type of AI and classification of data required
- identifying the implications of integrating the AI system with existing departmental systems and data, or as a standalone system
- identifying applicable legislation, regulations, and policies.