10. Human-centred values

10.1 Incorporating diversity

Diversity of perspective promotes inclusivity, mitigates bias, supports critical thinking and reduces the risk of non-compliance with anti-discrimination laws. It should be incorporated at all stages of the AI system lifecycle.

AI systems require input from stakeholders from a variety of backgrounds, including different ethnicities, genders, ages, abilities and socio-economic statuses. This also includes people with diverse professional backgrounds, such as ethicists, social scientists and domain experts relevant to the AI application. Determining which stakeholders and user groups to consult, which data to use, and the optimal team composition will depend on your AI system.

Failing to adequately incorporate diversity into relevant AI lifecycle stages can have unintended negative consequences, as illustrated in a number of real-world examples:

  • AI systems that were ineffective at predicting recidivism outcomes for defendants of colour, and that underestimated the health needs of patients from marginalised racial and ethnic backgrounds.
  • AI job recruitment systems that unfairly affected employment outcomes.
  • Algorithms used to prioritise patients for high-risk care management programs that were less likely to refer black patients than white patients with the same level of health.
  • An AI system designed to detect cancers that showed bias towards lighter skin tones, stemming from a failure to collect a sufficiently diverse set of skin tone images, potentially delaying life-saving treatments.

Resources, including approaches, templates and methods to ensure sufficient diversity and inclusion of your AI system, are described in the NAIC's Implementing Australia's AI Ethics Principles report.

10.2 Human rights obligations

You should consult an appropriate source of advice or otherwise ensure that your AI use case and use of data align with human rights obligations. If you have not done so, explain your reasoning.

It is recommended that you complete this question after you have completed the previous sections of the assessment. This will provide more complete information to enable an assessment of the human rights implications of your AI use case.

In Australia, it is unlawful to discriminate on the basis of a number of protected attributes, including age, disability, race, sex, intersex status, gender identity and sexual orientation, in certain areas of public life including education and employment. Australia's federal anti-discrimination laws are contained in legislation including the Age Discrimination Act 2004, the Disability Discrimination Act 1992, the Racial Discrimination Act 1975 and the Sex Discrimination Act 1984.

Human rights are defined in the Human Rights (Parliamentary Scrutiny) Act 2011 as the rights and freedoms contained in the 7 core international human rights treaties to which Australia is a party, namely the:

  • International Covenant on Civil and Political Rights (ICCPR)
  • International Covenant on Economic, Social and Cultural Rights (ICESCR)
  • International Convention on the Elimination of All Forms of Racial Discrimination (CERD)
  • Convention on the Elimination of All Forms of Discrimination against Women (CEDAW)
  • Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT)
  • Convention on the Rights of the Child (CRC)
  • Convention on the Rights of Persons with Disabilities (CRPD).

In addition to other rights referred to in this guidance, human rights you may consider as part of your assessment of the AI use case include:

  • a right to privacy – for example, where AI is being used for tracking and surveillance
  • freedom of expression and information – for example, where AI is used to moderate a forum and therefore possibly suppress legitimate forms of expression
  • human agency – for example, where AI makes an automated decision on an individual's behalf.

11. Accountability