Criterion 6: Identify and assign AI roles to ensure a diverse team of business and technology professionals with specialised skills.
Specialist roles may include the following, noting that an individual may perform one or more of these roles:
Criterion 7: Build and maintain AI capabilities by undertaking regular training and education of end users, staff, and stakeholders.
This may involve:
Criterion 8: Mitigate staff over-reliance on, under-reliance on, and aversion to AI.
This may involve:
Criterion 9: Provide end-to-end auditability.
End-to-end AI auditability refers to the ability to trace and inspect the decisions and processes involved across the AI system lifecycle. This supports internal and external scrutiny, and publishing audit results promotes public accountability, transparency, and trust.
This may include:
ensuring audit logging of the AI tools and systems is configured appropriately (see the sketch below)
This may include:
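As an illustration only, the sketch below shows one way structured audit logging for an AI-assisted decision could be configured using Python's standard logging library. The field names, use case, and identifiers are assumptions rather than anything prescribed by this framework; recording model and data versions in each entry also supports Criterion 21.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative structured audit logger for AI-assisted decisions.
audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
handler = logging.FileHandler("ai_audit.log")  # hypothetical log destination
handler.setFormatter(logging.Formatter("%(message)s"))
audit_logger.addHandler(handler)

def log_ai_decision(use_case, model_version, data_version, inputs_ref, output, operator_id):
    """Record one AI-assisted decision with enough context to reconstruct it later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "model_version": model_version,  # ties the decision to a specific model release
        "data_version": data_version,    # ties the decision to the data it relied on
        "inputs_ref": inputs_ref,        # reference to stored inputs, not the raw data itself
        "output": output,
        "operator_id": operator_id,      # the human accountable for the decision
    }
    audit_logger.info(json.dumps(entry))

# Hypothetical example entry.
log_ai_decision(
    use_case="benefit-triage",
    model_version="2.3.1",
    data_version="2024-07-cohort",
    inputs_ref="case/48201",
    output="refer_to_human_review",
    operator_id="staff-0042",
)
```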
Criterion 10: Perform ongoing data-specific checks across the AI lifecycle.
This should address:
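No particular technique is mandated; as a hedged sketch, ongoing data checks often cover completeness, schema conformance, and drift against a reference dataset. The column names, thresholds, and data below are hypothetical.

```python
import pandas as pd

def basic_data_checks(df: pd.DataFrame, reference: pd.DataFrame) -> dict:
    """Illustrative ongoing data checks: completeness, expected schema, and simple drift."""
    results = {}
    # Completeness: proportion of missing values per column.
    results["missing_rates"] = df.isna().mean().to_dict()
    # Schema conformance: columns expected from the reference but absent in the new batch.
    results["missing_columns"] = sorted(set(reference.columns) - set(df.columns))
    # Simple drift check on a hypothetical numeric field: relative shift in the mean.
    if "income" in df.columns and "income" in reference.columns:
        ref_mean = reference["income"].mean()
        results["income_mean_shift"] = abs(df["income"].mean() - ref_mean) / ref_mean
    return results

reference = pd.DataFrame({"income": [50_000, 55_000, 60_000], "age": [30, 40, 50]})
new_batch = pd.DataFrame({"income": [52_000, None, 61_000], "age": [34, 45, 29]})
print(basic_data_checks(new_batch, reference))
```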
Criterion 11: Perform ongoing model-specific checks across the AI lifecycle.
This should address:
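As an illustration only, ongoing model checks commonly compare current performance against an agreed baseline and flag degradation for review; the metric, baseline, and tolerance below are assumptions rather than recommended values.

```python
from sklearn.metrics import accuracy_score

def model_performance_check(y_true, y_pred, baseline=0.90, tolerance=0.05) -> dict:
    """Illustrative model check: flag the model for review if accuracy falls below the baseline."""
    accuracy = accuracy_score(y_true, y_pred)
    return {
        "accuracy": accuracy,
        "baseline": baseline,
        "needs_review": accuracy < baseline - tolerance,
    }

# Hypothetical batch of recent predictions scored against ground truth.
print(model_performance_check([1, 0, 1, 1], [1, 0, 0, 1]))  # accuracy 0.75 -> needs_review True
```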
Criterion 12: Explain the AI system and technology used, including the limitations and capabilities of the system.
AI algorithms and technologies, such as deep learning models, are often seen as 'black boxes'. This can make it difficult to understand how they work and the factors that generate their outcomes. Providing clear and understandable explanations of AI outputs helps maintain trust and transparency in AI systems.
Explainability grounded in the specific context of the use case ensures a clear understanding of the reasoning behind AI system outputs. This supports accountability, trust, and ethical considerations.
This may include:
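Explainability techniques vary by model type and use case. As one hedged example, permutation importance offers a model-agnostic view of which input features most influence outputs; the scikit-learn dataset and model below are purely illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative model-agnostic explanation: which features most influence predictions?
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade held-out performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, importance in top_features:
    print(f"{name}: {importance:.3f}")
```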
Criterion 13: Explain outputs made by the AI system to end users.
This typically includes:
Criterion 14: Explain how data is used and shared by the AI system.
This includes:
Managing bias in an AI system, and its potential harms, is critical to ensuring compliance with federal anti-discrimination legislation. Australia’s anti-discrimination law states:
…it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation in certain areas of public life, including education and employment.
Certain forms of bias, such as affirmative measures for disadvantaged or vulnerable groups, play a constructive role in aligning AI systems to human values, intentions, and ethical principles. At the same time, it’s important to identify and address biases that may lead to unintended or harmful consequences. A balanced approach to bias management ensures that beneficial biases are preserved while minimising the impact of problematic ones.
When integrating off-the-shelf AI products, it’s essential to ensure they deliver fair and equitable outcomes in the target operating environment. Conducting thorough bias evaluations becomes especially important when documentation or supporting evidence is limited.
Criterion 15: Identify how bias could affect people, processes, data, and technologies involved in the AI system lifecycle.
Systemic biases are rooted in societal and organisational culture, procedures, or practices that disadvantage or benefit specific cohorts. These biases manifest in datasets and in the processes throughout the AI lifecycle.
Human biases can affect design decisions, data collection, labelling, test selection, or any process that requires judgment throughout the AI lifecycle. They may be conscious (explicit) or unconscious (implicit).
Statistical and computational bias occurs when data used to train an AI system is not representative of the population. This is explored in more depth in the data section.
This includes:
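As a hedged illustration of identifying statistical and computational bias, the sketch below compares the representation of groups in training data against an external population benchmark. The attribute, group labels, and benchmark shares are assumptions for illustration only.

```python
import pandas as pd

def representation_gap(training_data: pd.DataFrame, attribute: str, benchmark: dict) -> dict:
    """Compare group shares in the training data with a population benchmark."""
    observed = training_data[attribute].value_counts(normalize=True).to_dict()
    return {
        group: observed.get(group, 0.0) - expected
        for group, expected in benchmark.items()
    }

# Hypothetical training data and census-style benchmark shares.
df = pd.DataFrame({"age_band": ["18-34"] * 70 + ["35-54"] * 20 + ["55+"] * 10})
gaps = representation_gap(df, "age_band", {"18-34": 0.30, "35-54": 0.35, "55+": 0.35})
print(gaps)  # Positive: over-represented in training data; negative: under-represented.
```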
Criterion 16: Assess the impact of bias on your use case.
This typically involves:
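One hedged way to assess impact is to compare favourable-outcome rates across groups defined by a protected attribute. The records and attribute below are hypothetical, and a large gap is a prompt for investigation rather than a definitive finding.

```python
import pandas as pd

def outcome_rate_by_group(results: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Illustrative impact check: favourable-outcome rate per group."""
    return results.groupby(group_col)[outcome_col].mean()

# Hypothetical decision records: 1 = favourable outcome, 0 = unfavourable.
records = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M"],
    "approved": [1, 0, 0, 1, 1, 1],
})
rates = outcome_rate_by_group(records, "gender", "approved")
print(rates)
print(rates.max() - rates.min())  # A large gap may indicate disparate impact worth investigating.
```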
Criterion 17: Manage identified bias across the AI system lifecycle.
For off-the-shelf products, AI deployers should ensure that the AI system provides fair outcomes. Evaluating for bias is critical where the supplier of the off-the-shelf AI model provides insufficient documentation.
This involves:
Version control is a process that tracks and manages changes to information such as data, models, and system code. This allows business and technical stakeholders to identify the state of an AI system when decisions are made, restore previous versions, and restore deleted or overwritten files.
AI system versioning can extend beyond traditional coding practices, which manage a package of identifiable code or configuration information. Version control for information such as training data, models, and hyperparameters will also need to be considered.
Information from across the AI lifecycle that was used to generate a decision or outcome must be captured. This applies to all AI products, including low-code or no-code third-party tools.
Criterion 18: Apply version management practices to the end-to-end development lifecycle.
Australian Government API guidelines mandate the use of semantic versioning. These guidelines should be enhanced to cater for AI-related information and processes.
Version standards should clearly document the difference between production and non-production data, models and code.
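As a minimal sketch only (the field names are assumptions, not a mandated schema), a version manifest that extends semantic versioning to AI artefacts, and records whether they are production or non-production, might look like the following.

```python
import json

# Illustrative version manifest tying together the artefacts behind one model release.
manifest = {
    "model_name": "benefit-triage-classifier",  # hypothetical system name
    "model_version": "2.3.1",                   # semantic versioning: MAJOR.MINOR.PATCH
    "environment": "production",                # distinguishes production from non-production artefacts
    "training_data_version": "2024-07-cohort",
    "code_commit": "a1b2c3d",                   # source control reference for training/serving code
    "hyperparameters": {"n_estimators": 200, "max_depth": 8},
    "released": "2024-08-01",
}
print(json.dumps(manifest, indent=2))
```

Recording the environment alongside the model, data, and code versions supports the production and non-production distinction described above, and the same identifiers can be carried into audit logs (Criterion 21).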
This involves applying version management practices to:
Criterion 19: Use metadata in version control to distinguish between production and non-production data, models, and code.
This includes:
Criterion 20: Use a version control toolset to improve useability for users.
Version control toolsets improve usability for service delivery and business users, addressing activities such as appeals, Ministerial correspondence, executive briefs, court cases, audit, assurance, privacy, and legislative reviews.
This includes:
Criterion 21: Record version control information in audit logs.
This includes: