  • Statement 6: Manage system bias 

  • Managing bias in an AI system, and the potential harms it can cause, is critical to ensuring compliance with federal anti-discrimination legislation. Australia’s anti-discrimination law states: 

    …it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation in certain areas of public life, including education and employment. 

    Certain forms of bias, such as affirmative measures for disadvantaged or vulnerable groups, play a constructive role in aligning AI systems to human values, intentions, and ethical principles. At the same time, it’s important to identify and address biases that may lead to unintended or harmful consequences. A balanced approach to bias management ensures that beneficial biases are preserved while minimising the impact of problematic ones.

    When integrating off-the-shelf AI products, it’s essential to ensure they deliver fair and equitable outcomes in the target operating environment. Conducting thorough bias evaluations becomes especially important when documentation or supporting evidence is limited.

    Agencies must:

    • Criterion 15: Identify how bias could affect people, processes, data, and technologies involved in the AI system lifecycle.

      Systemic bias is rooted in societal and organisational culture, procedures, or practices that disadvantage or benefit specific cohorts. These biases manifest in datasets and in processes throughout the AI lifecycle.

      Human bias can affect design decisions, data collection, labelling, test selection, or any process that requires judgment throughout the AI lifecycle. It may be conscious (explicit) or unconscious (implicit).

      Statistical and computational bias occurs when data used to train an AI system is not representative of the population. This is explored in more depth in the data section.

      This includes: 

      • establishing a bias management plan, outlining how bias will be identified, assessed, and managed across the AI system lifecycle
      • checking for systemic bias, which is rooted in societal and organisational culture, procedures, and practices that disadvantage or benefit specific cohorts, and which manifests in datasets and in processes throughout the AI lifecycle
      • checking for algorithmic bias in decision-making systems, where an output from an AI system might produce incorrect, unfair or unjustified results
      • checking for human bias, which can be conscious or unconscious biases in design decisions, data collection, labelling, test selection, or any process that requires judgment throughout the AI lifecycle
      • checking for statistical and computational bias, which can occur when data used to train an AI system is not representative of the population
      • checking for bias based on the application of AI, such as identifying cognitive bias in a computer vision system
      • considering intended bias, such as measures deliberately applied to account for the specific circumstances of a person or group
      • considering inherent bias when reusing pre-trained AI models.
         
      • Examples of sources of bias include:
        • Cognitive bias – systematic patterns in human reasoning, such as subconscious judgements shaped by an individual's norms and by how they interpret information in their surroundings, for example relying only on data that reinforces an existing belief
        • Authority bias – tendency to provide greater weighting or consideration of information from an authority source
        • Availability bias – giving undue weight to information or processes that a person is, or has been, actively involved with
        • Confirmation bias – tendency to interpret, favour, or seek out information that reinforces a personal belief, value or understanding, such as a political alignment
        • Contextual bias – reliance upon unnecessary or irrelevant information which may unduly influence a decision
        • In-group or labelling bias – preferential treatment given to those who belong to the same group. Conversely, out-group bias is unfavourable treatment of those who belong to other groups
        • Stereotype bias – generalisations about an individual or group of people based on shared characteristics, such as age, gender, or ethnicity
        • Anchoring bias – tendency to rely heavily on the first piece of information received
        • Groupthink – tendency for people to strive for consensus within a group
        • Automation bias – tendency to over-rely on automated systems and to discount contradictory information produced without automation.
    • Criterion 16: Assess the impact of bias on your use case.

      This typically involves:

      • identifying stakeholders and the potential harms to them
      • identifying existing countermeasures and assessing their effectiveness
      • engaging with diverse and multi-disciplinary stakeholders in assessing the potential impacts of bias
      • using bias assessment tools relevant to your use case.
    • Criterion 17: Manage identified bias across the AI system lifecycle.

      For off-the-shelf products, AI deployers should ensure that the AI system provides fair outcomes. Evaluating for bias is critical where the off-the-shelf AI model supplier provides insufficient documentation.

      This involves:

      • engaging multi-disciplinary skillsets and diverse perspectives, including:
        • policy owners, legal, architecture, data, IT experts, program managers, service delivery professionals, subject matter experts
        • people with lived experience, for example people with disability, people with gender or sexual diversity, and people who are culturally and linguistically diverse.
      • implementing multiple approaches to reduce automation bias and monitor to detect unwanted bias that might emerge
      • identifying bias-specific documentation requirements such as data and model provenance records:
        • document the criteria for selecting stakeholders, metrics, and other design-related decisions
        • document any discarded requirements, designs, data, models, or tests with the corresponding rationale
        • document biases that resulted in decommissioning the data, the model, the application, or the system
      • performing periodic context-based bias awareness training for teams
      • considering lifecycle stage-specific mitigations, including:
        • identify and validate root causes of bias before addressing them
        • identify corrective and preventive actions corresponding to the root causes of bias
        • identify fairness metrics at design. Performance metrics, such as accuracy and precision, aggregated over the entire dataset could hide bias. For example, a cancer-detecting device with 90 per cent accuracy averaged across the entire dataset could hide underperformance on a minority population. Disaggregating performance metrics into suitable attributes can detect whether a system performs fairly across demographics, environmental conditions, and other risk factors (see the sketch after this list)
        • analyse data for bias and fix issues in the data. See the Model and Context dataset section for more information
        • define a test independence strategy and perform functional performance testing, fairness testing, and user acceptance testing
        • configure, calibrate, and monitor bias-related metrics during phased roll-out
        • monitor bias-related metrics and unintended consequences during operations. Provide mechanisms for end-users to report and escalate experiences of bias.
        • audit for how risks of bias are identified, assessed, and mitigated throughout the lifecycle.
        • find and use suitable tools that discover and test for unwarranted associations between the AI system outputs and protected input features
        • implement bias mitigation techniques after harmful bias has been identified
        • implement bias mitigation thresholds that can be configured post-deployment to ensure equity for cohorts, such as people with lived experience.
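
      As a minimal illustration of the disaggregation point above, the following Python sketch (using invented data and a hypothetical protected attribute) computes accuracy per cohort so that underperformance on a minority group is not hidden by an aggregate figure.

      ```python
      # Minimal sketch: disaggregate accuracy by a protected attribute.
      # Data and group names are invented for illustration only.
      from collections import defaultdict

      records = [
          # (protected_group, true_label, predicted_label)
          ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
          ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
      ]

      totals, correct = defaultdict(int), defaultdict(int)
      for group, truth, prediction in records:
          totals[group] += 1
          correct[group] += int(truth == prediction)

      overall = sum(correct.values()) / sum(totals.values())
      print(f"overall accuracy: {overall:.2f}")
      for group in totals:
          print(f"{group} accuracy: {correct[group] / totals[group]:.2f}")
      # A large gap between cohorts flags potential bias even when the overall figure looks acceptable.
      ```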
           
  • Statement 7: Apply version control practices

  • Version control is a process that tracks and manages changes to information such as data, models, and system code. This allows business and technical stakeholders to identify the state of an AI system when decisions are made, restore previous versions, and restore deleted or overwritten files.

    AI system versioning can extend beyond traditional coding practices, which manage a package of identifiable code or configuration information. Version control for information such as training data, models, and hyperparameters will need to be considered.

    Information across the AI lifecycle, that was used to generate a decision or outcome, must be captured. This applies to all AI products, including low code or no code third-party tools.

    Agencies must:

    • Criterion 18: Apply version management practices to the end-to-end development lifecycle.

      The Australian Government API guidelines mandate the use of semantic versioning. These practices should be extended to cater for AI-related information and processes.

      Version standards should clearly document the difference between production and non-production data, models and code.

      This involves applying version management practices to the following (a hypothetical version manifest is sketched after this criterion):

      • the model, training and operational datasets, data in the AI system, the training algorithm, and hyperparameters
      • design documentation outlining the end-to-end AI system state, maintained in line with existing organisational control mechanisms
      • point-in-time dates and timestamps for data and any changes to data
      • authorship, relevant licensing details, and changes since the last version
      • approvals from accountable officials for workflow and model reviews, datasets used for training, and relevant hyperparameters
      • records of any data poisoning or AI poisoning and how it was managed
      • data versioning that supports AI interoperability, including:
        • consistency: data structures, exchanges, and formats across different sources are well-defined
        • integration: data from different sources can be integrated seamlessly
      • all documents relating to the establishment, design, and governance of an AI-implemented solution, which must be retained as per the Archives Act 1983.
    • This does not apply to:
      • third-party software products, which are subject to existing controls.
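
      One way to capture the AI-specific version information described in Criterion 18 is a simple manifest recorded alongside each release. The sketch below is illustrative only; field names, values, and the storage format are assumptions rather than a prescribed structure.

      ```python
      # Illustrative version manifest for an AI system release (all field names are hypothetical).
      import hashlib
      import json
      from datetime import datetime, timezone

      manifest = {
          "system_version": "2.4.0",             # semantic version of the overall AI system
          "model_version": "2.4.0-model.3",      # version of the trained model artefact
          "training_dataset_sha256": hashlib.sha256(b"training-data-snapshot").hexdigest(),
          "hyperparameters": {"learning_rate": 0.001, "epochs": 20},
          "approved_by": "accountable.official@example.gov.au",
          "recorded_at": datetime.now(timezone.utc).isoformat(),
      }

      # Store the manifest with the release so the system state can be reconstructed later.
      with open("ai_version_manifest.json", "w") as f:
          json.dump(manifest, f, indent=2)
      ```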

    Agencies should:

    • Criterion 19: Use metadata in version control to distinguish between production and non-production data, models, and code.

      This includes:

      • a simple and transparent way for all users of the system to understand the version of each component at the time a decision was made
      • the use of tags in the version number to provide a visual representation of non-production versions without needing direct access to data or source control toolsets
      • the use of metadata to distinguish between different control states where outputs can vary but the core functionality of the system has not changed (a hypothetical tagging scheme is sketched below).
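
      As a hypothetical illustration of tagging, semantic-version build metadata can mark non-production artefacts without changing the functional version number; the naming scheme below is an assumption, not a prescribed convention.

      ```python
      # Hypothetical tagging scheme: build metadata distinguishes production from non-production artefacts.
      prod_tag = "3.1.0+prod.model.a1b2c3d"
      nonprod_tag = "3.1.0-rc.2+nonprod.synthetic-data"

      def is_production(tag: str) -> bool:
          """Treat any tag whose build metadata starts with 'prod' as a production artefact."""
          build_metadata = tag.split("+", 1)[1] if "+" in tag else ""
          return build_metadata.startswith("prod")

      print(is_production(prod_tag))     # True
      print(is_production(nonprod_tag))  # False
      ```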
    • Criterion 20: Use a version control toolset to improve usability for users. 

      Version toolsets improve usability for service delivery and business users, addressing activities such as appeals, Ministerial correspondence, executive briefs, court cases, audit, assurance, privacy, and legislative reviews.

      This includes:

      • using purpose-built in-house or commercial version management products
      • storing sufficient information to allow rollback to a previous system state
      • considering archival requirements of training data used in a test environment. 
    • Criterion 21: Record version control information in audit logs.

      This includes:

      • use of a commit hash to identify the control state of all elements, to reduce the volume and complexity of audit log data
      • recording AI predictions and actions taken
      • proactive data analytics run against the audit logs, to monitor and assess ongoing AI system performance
      • recording version control information even where low-code or no-code third-party tools are used (see the illustrative log entry after this list).
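
      A sketch of what such an audit log entry might look like is shown below; the field names are illustrative assumptions that agencies would align with their own logging standards.

      ```python
      # Illustrative audit log entry tying an AI prediction to the version control state.
      import json
      from datetime import datetime, timezone

      audit_entry = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "commit_hash": "9f8e7d6",           # identifies the control state of all versioned elements
          "model_version": "2.4.0-model.3",
          "input_reference": "case-000123",   # reference to the input record, not the raw data itself
          "prediction": "refer_to_human_review",
          "action_taken": "escalated",
      }

      print(json.dumps(audit_entry, indent=2))
      ```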
  • Statement 8: Apply watermarking techniques

  • AI watermarking can be used to embed visual or hidden markers into generated content, so that its creation details can be identified. It provides transparency, authenticity, and trust to content consumers.

    Visual watermarks or disclosures provide a simple way for someone to know they are viewing content created by, or interacting with, an AI system. This may include generated media content or GenAI systems.

    The Coalition for Content Provenance and Authenticity (C2PA) is developing an open technical standard for publishers, creators, and consumers to establish the origin and edits of digital content. Advice on the use of C2PA is out of scope for this standard.

    Agencies must:

    • Criterion 22: Apply visual watermarks and metadata to generated media content to provide transparency and provenance, including authorship.

      This will only apply where AI generated content may directly impact a user. For instance, using AI to generate a team logo would not need to be watermarked.
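
      As a minimal, hypothetical illustration of pairing a visible disclosure with provenance metadata, the sketch below attaches an AI-generation notice and a metadata record to a piece of generated text. The identifiers and fields are assumptions; in practice agencies would use dedicated watermarking or C2PA-style content-credential tooling.

      ```python
      # Hypothetical sketch: visible disclosure plus provenance metadata for generated content.
      # Not a substitute for dedicated watermarking or content-credential tooling.
      import hashlib
      import json
      from datetime import datetime, timezone

      generated_text = "Draft response produced for illustration purposes."

      disclosure = "This content was generated with the assistance of an AI system."
      published_text = f"{disclosure}\n\n{generated_text}"

      provenance = {
          "generator": "example-genai-service",   # hypothetical system identifier
          "model_version": "2.4.0-model.3",
          "author_agency": "Example Agency",
          "content_sha256": hashlib.sha256(published_text.encode()).hexdigest(),
          "generated_at": datetime.now(timezone.utc).isoformat(),
      }

      # Publish the disclosed text and keep the provenance record alongside it.
      with open("content_provenance.json", "w") as f:
          json.dump(provenance, f, indent=2)
      ```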

    • Criterion 23: Apply watermarks and metadata that are WCAG compatible, where relevant.

    • Criterion 24: Apply visual and accessible content to indicate when a user is interacting with an AI system.
      For example, this may include adding text to a GenAI interface so that users are aware they are interacting with an AI system rather than a human.

    Agencies should:

    • Criterion 25: For hidden watermarks, use watermarking tools based on the use case and content risk.

      This includes:

      • including provenance and authorship information
      • encrypting watermarks for high-risk content
      • using an existing tool or technique when practicable
      • embedding watermarks at the AI training stage to improve their effectiveness and allow additional information, such as content modifications, to be included
      • verifying that the watermark does not degrade the quality or efficiency of content generation, for example through image degradation or reduced text readability
      • including data sources, such as publicly available content used for AI training to manage copyright risks, and product details such as versioning information.
    • Criterion 26: Assess watermarking risks and limitations.

      This includes:

      • ensuring users understand that third parties can replicate a visual watermark and that they should not over-rely on watermarks, particularly for content sourced externally
      • preventing third-party use of watermarking algorithms to create their own content and act as the original content creator
      • considering situations where watermarking is not beneficial, for example where it is visually distracting for decision makers or overused in low-risk applications
      • considering situations where malicious actors might remove or replicate the watermark to reproduce content generated by AI
      • managing copyright or trademark risks related to externally sourced data.
  • Statement 9: Conduct pre-work

  • Agencies must:

    • Criterion 27: Define the problem to be solved, its context, intended use, and impacted stakeholders.

      This includes:

      • analysing the problem through problem-solving frameworks such as root cause analysis, design thinking, and DMAIC (define, measure, analyse, improve, control)
      • defining user needs, system goals, and the scope of AI in the system
      • identifying and documenting stakeholders, including:
        • internal or external end-users, such as APS staff or members of the public
        • Indigenous Australians (refer to the Framework for Governance of Indigenous Data)
        • people with lived experiences, including those defined by religion, ethnicity, or migration status
        • data experts, such as owners of the data being used to train and validate the AI system
        • subject matter experts, such as internal staff
        • the development team, including SROs, architects, and engineers.
      • understanding the context of the problem such as interacting processes, data, systems, and the internal and external operating environment
      • phrasing the problem in a way that is technology agnostic.
    • Criterion 28: Assess AI and non-AI alternatives.

      This includes:

      • starting with the simplest design, experimenting, and iterating
      • validating and justifying the need for AI through an objective, evidence-based assessment
      • differentiating parts that could be solved by traditional software from parts that could benefit from AI
      • determining whether AI would be more beneficial than non-AI alternatives by comparing KPIs
      • considering the interaction of any AI and non-AI components
      • considering existing agency solutions, commercial, or open-source off-the-shelf products
      • examining capabilities, performance, cost, and limitations of each option
      • conducting proof of concept and pilots to assess and validate the feasibility of each option
      • for transformative use cases, considering foundation and frontier models. Foundation models are versatile, trained on large datasets, and can be fine-tuned for specific contexts. Frontier models are at the forefront of AI research and development, trained on extensive datasets, and may demonstrate creativity or reasoning.
    • Criterion 29: Assess environmental impact and sustainability. 

      Developing and using AI systems may have corresponding trade-offs with electricity usage, water consumption, and carbon emissions.

    • Criterion 30: Perform cost analysis across all aspects of the AI system.

      This includes:

      • infrastructure, software, and tooling costs for:
        • acquiring and processing data for training, validation, and testing
        • tuning the AI system to your particular use case and environment
        • internally or externally hosting the AI system
        • operating, monitoring, and maintaining the AI system.
      • cost of human resources with the necessary AI skills and expertise.
    • Criterion 31: Analyse how the use of AI will impact the solution and its delivery.

      This includes:

      • identifying the type of AI and classification of data required
      • identifying the implications of integrating the AI system with existing departmental systems and data, or as a standalone system
      • identifying legislation, regulations, and policies.
  • Statement 10: Adopt a human-centred approach

  • Agencies must:

    • Criterion 32: Identify human values requirements.

      Human values represent what people deem important in life such as autonomy, simplicity, tradition, achievement, and social recognition.

      This includes:

      • using traditional requirement elicitation techniques such as surveys, interviews, group discussions and workshops to capture relevant human values for the AI use case
      • translating human values into technical requirements, which may vary depending on the risk level and AI use case
      • reviewing feedback to identify human values the AI system has overlooked
      • understanding the hierarchy of human values and emphasising those with higher relevance
      • considering social, economic, political, ethical, and legal values when designing AI systems
      • considering human values that are domain specific and based on the context of the AI system.
    • Criterion 33: Establish a mechanism to inform users of AI interactions and output, as part of transparency.

      Depending on the use case, this may include:

      • incorporating visual cues on the AI product when applicable
      • informing users when text, audio or visual messages addressed to them are generated by AI
      • including visual watermarks to identify content generated by AI
      • providing transparency on whether a user is interacting with a person or a system
      • including a disclaimer on the limitations of the system
      • displaying the relevance and currency of the information being provided
      • persona-level transparency adhering to need-to-know principles
      • providing alternate channels where a user chooses not to use the AI system. This may include channels such as a non-AI digital interface, telephony, or paper.
    • Criterion 34: Design AI systems to be inclusive and ethical, and to meet accessibility standards, using appropriate mechanisms. 

      This includes:

      • identifying affirmative actions or preferential treatment that apply for any person or specific stakeholder groups
      • ensuring diversity and inclusion requirements, and guidelines, are met throughout the entire AI lifecycle
      • providing justification for situations such as pro-social policy outcomes
      • reviewing and revisiting ethical considerations throughout the AI system lifecycle.
    • Criterion 35: Define feedback mechanisms.

      This includes:

      • providing options to users on the type of feedback method they prefer
      • providing users with the choice to dismiss feedback
      • providing the user with the option to opt out of the AI system
      • ensuring measures to protect personal information and user privacy
      • capturing implicit feedback to reflect users' preferences and interactions, such as accepting or rejecting recommendations, usage time, or login frequency
      • capturing explicit feedback via surveys, comments, ratings, or written feedback.
    • Criterion 36: Define human oversight and control mechanisms.

      This includes:

      • identifying conditions and situations that need to be supervised and monitored by a human, conditions that need to be escalated by the system to a supervisor or operator for further review and approval, and conditions that should trigger transfer of control from the AI system to a supervisor or operator (a minimal escalation sketch follows this list)
      • defining the system states, errors, and other relevant information that should be observable and comprehensible to an informed human
      • defining the pathway for the timely intervention, decision override, or auditable system takeover by authorised internal users
      • recording subsets of inputs and outputs that may result in harm, for monitoring, auditing, contesting, or validation. This facilitates review of false positives against the inputs that triggered them, and of false negatives that resulted in harm
      • identifying situations where a supervising human might become disengaged and designing the system to attract the operator's attention
      • mapping human oversight and control requirements to the corresponding risks they mitigate
      • identifying required personas and defining their roles
      • adherence to privacy and security need-to-know principles.
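
      A minimal sketch of one such oversight mechanism is shown below: low-confidence or high-impact outputs are routed to a human reviewer rather than actioned automatically. The threshold, labels, and routing logic are hypothetical and would be calibrated per use case.

      ```python
      # Hypothetical escalation rule: route low-confidence or high-impact AI outputs to a human reviewer.
      from dataclasses import dataclass

      @dataclass
      class AiOutput:
          decision: str
          confidence: float   # model confidence between 0.0 and 1.0
          high_impact: bool   # e.g. the decision affects a person's entitlements

      CONFIDENCE_THRESHOLD = 0.85  # illustrative value, calibrated per use case

      def route(output: AiOutput) -> str:
          """Return who actions the output: the system or a human supervisor."""
          if output.high_impact or output.confidence < CONFIDENCE_THRESHOLD:
              return "escalate_to_human_reviewer"
          return "proceed_automatically"

      print(route(AiOutput("approve", 0.95, high_impact=False)))  # proceed_automatically
      print(route(AiOutput("approve", 0.70, high_impact=False)))  # escalate_to_human_reviewer
      print(route(AiOutput("decline", 0.95, high_impact=True)))   # escalate_to_human_reviewer
      ```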

    Agencies should:

    • Criterion 37: Involve users in the design process.

      The intention is to promote better outcomes for managing inclusion and accessibility by setting expectations at the beginning of the AI system lifecycle.

      This includes:

      • considering security guidance and the need-to-know principle
      • involving users in defining requirements, evaluating, and trialling systems or products.
         
  • Statement 11: Design safety systemically

  • Agencies must:

    • Criterion 38: Analyse and assess harms.

      This includes:

      • utilising functional safety standards that provide frameworks for a systematic and robust harms analysis.
    • Criterion 39: Mitigate harms by embedding mechanisms for prevention, detection, and intervention.

      This includes:

      • designing the system to avoid the sources of harm
      • designing the system to detect the sources of harm
      • designing the system to check and filter its inputs and outputs for harm
      • designing the system to check for sensitive information disclosure
      • designing the system to monitor faults in its operation
      • designing the system with redundancy
      • designing intervention mechanisms such as warnings to users and operators, automatic recovery to a safe state, transfer of control, and manual override
      • designing the system to log the harms and faults it detects
      • designing the system to disengage safely as per requirements
      • for physical systems, designing proper protective equipment and procedures for safe handling
      • ensuring the system meets privacy and security requirements and adheres to the need-to-know principle for information security.

    Agencies should:

    • Criterion 40: Design the system to allow calibration at deployment.

      This includes:

      • where initial setup parameters are critical to the performance, reliability, and safety of the AI system.
         
  • Statement 12: Define success criteria

  • Agencies must:

    • Criterion 41: Identify, assess, and select metrics appropriate to the AI system.

      Relying on a single metric could lead to false confidence, while tracking irrelevant metrics could lead to false incidents. To mitigate these risks, analyse the capabilities and limitations of each metric, select multiple complementary metrics, and implement methods to test assumptions and to find missing information.

      Considerations for metrics includes:

      • value-proposition metrics – benefits realisation, social outcomes, financial measures, or productivity measures
      • performance metrics – precision and recall for classification models, mean absolute error for regression models, bilingual evaluation understudy (BLEU) for text generation and summarisation tasks, inception score for image generation models, or mean opinion score for audio generation (a small worked example follows this list)
      • training data metrics – data diversity and data quality related measures
      • bias-related metrics – demographic parity to measure group fairness, fairness through awareness to measure individual fairness, counterfactual fairness to measure causality-based fairness
      • safety metrics – likelihood of harmful outputs, adversarial robustness, or potential data leakage measures
      • reliability metrics – availability, latency, mean time between failures (MTBF), mean time to failure (MTTF), or response time
      • citation metrics – measures related to proper acknowledgement and references to direct content and specialised ideas
      • adoption-related metrics – adoption rate, frequency of use, daily active users, session length, abandonment rate, or sentiment analysis
      • human-machine teaming metrics – total time or effort taken to complete a task, reaction time when human control is needed, or number of times human intervention is needed
      • qualitative measures – checking the well-being of the humans operating or using the AI system, or interviewing participants and observing them while using the AI system to identify usability issues
      • drift in AI system inputs and outputs – changes in input distribution, outputs, and performance over time.
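
      To make a few of these metrics concrete, the sketch below uses invented data to compute precision and recall alongside a simple demographic parity difference, showing how complementary metrics can be reported together rather than relying on a single figure.

      ```python
      # Illustrative computation of complementary metrics on invented data.
      records = [
          # (group, true_label, predicted_label)
          ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
          ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 0),
      ]

      tp = sum(1 for _, t, p in records if t == 1 and p == 1)
      fp = sum(1 for _, t, p in records if t == 0 and p == 1)
      fn = sum(1 for _, t, p in records if t == 1 and p == 0)

      precision = tp / (tp + fp)
      recall = tp / (tp + fn)

      def positive_rate(group: str) -> float:
          """Share of records in the cohort that received a positive prediction."""
          preds = [p for g, _, p in records if g == group]
          return sum(preds) / len(preds)

      # Demographic parity difference: gap in positive-prediction rates between cohorts.
      parity_difference = abs(positive_rate("group_a") - positive_rate("group_b"))

      print(f"precision={precision:.2f} recall={recall:.2f} parity_difference={parity_difference:.2f}")
      ```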
         
    • After metrics have been identified, understand and assess the trade-offs between the metrics.

      This includes:

      • assessing trade-offs between different success criteria
      • determining the possible harms with incorrect output, such as a false positive or false negative
      • analysing how the output of the AI system could be used. For example, determine which instance would have greater consequences: a false negative that would fail to detect a cyberattack; or a false positive that incorrectly flags a legitimate user as a threat (a worked example follows this list)
      • assessing the trade-offs among the performance metrics
      • understanding the trade-offs with costs, explainability, reliability, and safety
      • understanding the limitations of the selected metrics and ensuring appropriate measures are considered when building the AI system, such as when selecting data and training methods
      • ensuring trade-offs are documented, understood by stakeholders, and accounted for in selecting AI models and systems
      • optimising the metrics appropriate to the use case.
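
      The false-positive versus false-negative trade-off can be made concrete with a simple expected-cost comparison; the counts and cost figures below are invented purely for illustration.

      ```python
      # Invented figures: compare the expected cost of false negatives and false positives
      # for a hypothetical cyber-threat detection use case.
      false_negatives = 5                 # missed attacks per month
      false_positives = 200               # legitimate users incorrectly flagged per month
      cost_per_false_negative = 50_000    # e.g. incident response and remediation
      cost_per_false_positive = 100       # e.g. manual review and user friction

      fn_cost = false_negatives * cost_per_false_negative
      fp_cost = false_positives * cost_per_false_positive

      print(f"false-negative cost: ${fn_cost:,}")   # $250,000
      print(f"false-positive cost: ${fp_cost:,}")   # $20,000
      # Here missed detections dominate, so the metric trade-off should favour recall over precision.
      ```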

    Agencies should:

    • Criterion 42: Reevaluate the selection of appropriate success metrics as the AI system moves through the AI lifecycle.

    • Criterion 43: Continuously verify correctness of the metrics.

      Before relying on the metrics, verify the following:

      • metrics accurately reflect situations where the AI system does not have enough information
      • metrics correctly reflect errors, failures, and successful task performance.
  • Statement 13: Establish data supply chain management processes
