5. Fairness
5.1 Defining fairness
Fairness is a core principle in the design and use of AI systems, but it is a complex and contextual concept. Australia’s AI Ethics Principles state that AI systems should be inclusive and accessible and should not involve or result in unfair discrimination. However, there are different and sometimes conflicting definitions of fairness, and people may disagree on what is fair.
For example, there is a distinction between:
- individual fairness – treating similar individuals similarly
- group fairness – achieving similar outcomes across different demographic groups.
Different approaches to fairness involve different trade-offs and value judgements. The most appropriate approach will depend on the specific context and objectives of your AI use case.
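To make this distinction concrete, the sketch below applies both notions to a set of invented screening decisions. It is a minimal illustration in Python: the applicant records, group labels and score tolerance are hypothetical, and real fairness checks would use metrics chosen for your context.

```python
# Hypothetical screening decisions: (applicant_id, group, score, approved).
decisions = [
    ("a1", "group_x", 0.82, True),
    ("a2", "group_y", 0.81, False),  # similar score to a1, different outcome
    ("a3", "group_x", 0.40, False),
    ("a4", "group_y", 0.45, False),
]

# Group fairness: compare approval rates across demographic groups.
def approval_rate(records, group):
    in_group = [r for r in records if r[1] == group]
    return sum(r[3] for r in in_group) / len(in_group)

# Individual fairness: similar individuals should receive similar outcomes.
def inconsistent_pairs(records, tolerance=0.05):
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if abs(a[2] - b[2]) <= tolerance and a[3] != b[3]:
                pairs.append((a[0], b[0]))
    return pairs

print(approval_rate(decisions, "group_x"))   # 0.5
print(approval_rate(decisions, "group_y"))   # 0.0
print(inconsistent_pairs(decisions))         # [('a1', 'a2')]
```

Note that the two notions can pull in different directions: adjusting outcomes to equalise group approval rates can create new pairs of similar individuals who are treated differently. This is one example of the trade-offs referred to above.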
When defining fairness for your AI use case, you should be aware that AI models are typically trained on broad sets of data that may contain bias. Bias can arise where data is incomplete, unrepresentative or reflects societal prejudices. AI models may reproduce biases present in their training data, which can lead to misleading or unfair outputs, insights or recommendations. This may disproportionately impact some groups, such as First Nations people, people with disability, LGBTIQ+ communities and culturally and linguistically diverse communities. For example, an AI tool used to screen job applicants might systematically disadvantage people from certain backgrounds if trained on hiring data that reflects past discrimination.
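One practical implication is that training data should be audited before it is used. The sketch below is a minimal, hypothetical example using pandas; the column names and records are invented, and a real audit would also cover representativeness and data quality, not just label disparities.

```python
import pandas as pd

# Hypothetical historical hiring records. The 'hired' labels reflect past
# decisions, which may encode past discrimination rather than suitability.
history = pd.DataFrame({
    "background": ["A", "A", "A", "B", "B", "B"],
    "hired":      [1,   1,   0,   1,   0,   0],
})

# Compare historical hire rates across backgrounds. A large gap signals
# that a model trained on these labels may reproduce the disparity.
hire_rates = history.groupby("background")["hired"].mean()
print(hire_rates)                            # A: 0.67, B: 0.33
print(hire_rates.max() - hire_rates.min())   # gap of roughly 0.33
```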
When defining fairness for your AI use case, consider the inclusivity and accessibility of the AI. AI can lead to unfairness if it creates barriers for individuals or groups who wish to access government services. For example, an AI chatbot designed to provide social security information may produce unfair outcomes if vulnerable or underrepresented groups find it harder to access the digital technologies required to use the chatbot.
When defining fairness for your AI use case, it is recommended that you:
- consult relevant domain experts, affected parties and stakeholders (such as those you have identified at assessment section 2.4) to help you understand the trade-offs and value judgements that may be involved
- document your definition of fairness in your response to assessment section 5.1, including how you have balanced competing priorities and why you believe it to be appropriate to your use case
- be transparent about your fairness definition and be open to revisiting it based on stakeholder feedback and real-world outcomes.
You should also ensure that your definition of fairness complies with anti-discrimination laws. In Australia, it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation, in certain areas of public life including education and employment. Australia’s federal anti‑discrimination laws are contained in the following legislation:
- Age Discrimination Act 2004
- Disability Discrimination Act 1992
- Racial Discrimination Act 1975
- Sex Discrimination Act 1984.
Where the AI will produce information or be involved in decision-making, you should also ensure that your definition of fairness reflects the administrative law principle of procedural fairness, which requires that decision-making is transparent and challengeable.
5.2 Measuring fairness
You may be able to use a combination of quantitative and qualitative approaches to measure fairness. Quantitative fairness metrics allow you to compare outcomes across different groups and assess them against your fairness criteria. Qualitative assessments, such as stakeholder engagement and expert review, can provide additional context and surface issues that metrics alone might miss.
Quantifying fairness
The specific quantitative metrics you use to measure fairness will depend on the definition of fairness you have adopted for your use case. When selecting fairness metrics, you should:
- choose metrics that align with your fairness definition, recognising the trade-offs between different fairness criteria and other objectives such as accuracy
- confirm if you have appropriate data to assess those metrics, including compliance with the Australian Privacy Principles where personal or sensitive information is being collected and used
- set clear and measurable acceptance criteria (see guidance for section 6.4)
- establish a plan for monitoring these metrics (see section 6.6) and processes for remediation, intervention or safely disengaging the AI system if those thresholds are not met.
For examples of commonly used fairness metrics, see the Fairness Assessor Metrics Pattern from CSIRO's Data61.
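As a concrete illustration, the sketch below computes one commonly used metric, the demographic parity difference (the gap in favourable-outcome rates between groups), and checks it against an acceptance threshold. The outcome data, group labels and threshold value are all hypothetical; your own metrics and acceptance criteria should follow from the fairness definition you documented at section 5.1.

```python
# Hypothetical model outcomes (1 = favourable outcome) and group labels.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["x", "x", "x", "x", "y", "y", "y", "y"]

def selection_rate(outcomes, groups, group):
    selected = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(outcomes, groups):
    rates = {g: selection_rate(outcomes, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical acceptance criterion: the gap must stay below 0.2.
THRESHOLD = 0.2
gap = demographic_parity_difference(outcomes, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.50
if gap >= THRESHOLD:
    print("Acceptance criterion not met - investigate and remediate.")
```

In ongoing monitoring, the same check would run on production outcomes at regular intervals, with your remediation, intervention or disengagement processes triggered when the threshold is breached.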
Qualitatively assessing fairness
Consider the following qualitative approaches, which can help you overcome data limitations and surface issues that metrics alone may overlook.
Stakeholder engagement
Consult affected communities, stakeholders and domain experts to understand their perspectives and identify potential issues.
User testing and feedback
Test your AI system with diverse users and solicit their feedback on the fairness and appropriateness of the system’s outputs. Seek out the perspectives of marginalised groups and groups that may be impacted by the AI system.
Expert review
Engage experts, such as AI ethicists or accessibility and inclusivity specialists, to review the fairness of your system's outputs and the overall approach to fairness. Identify potential gaps or unintended consequences.
Resources
- For advice on bias measurement and minimisation techniques, see the National AI Centre's report on Implementing Australia's AI Ethics Principles.
- CSIRO Data61's Responsible AI Pattern Catalogue includes a Fairness Assessor Metrics Pattern.
- Consider resources on fairness in AI in the OECD Catalogue of Tools & Metrics for Trustworthy AI.