• Under Australia’s AI Ethics Principles, AI systems should throughout their lifecycle be inclusive and accessible and should not involve or result in unfair discrimination against individuals, communities or groups.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    4.1    Defining fairness

    Do you have a clear definition of what constitutes a fair outcome in the context of your use of AI?

    Where appropriate, you should consult relevant domain experts, affected parties and stakeholders to determine how to contextualise fairness for your use of AI. Consider inclusion and accessibility. Consult the guidance document for prompts and resources to assist you.

    4.2    Measuring fairness

    Do you have a way of measuring (quantitatively or qualitatively) the fairness of system outcomes?

    Measuring fairness is an important step in identifying and mitigating fairness risks. A wide range of metrics are available to address various concepts of fairness. Consult the guidance document for resources to assist you.
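    For illustration only, the sketch below shows how one such metric (the difference in favourable-outcome rates between groups, sometimes called demographic parity difference) might be calculated, assuming tabular outcome data in Python with pandas. The column names and data are placeholders, and the appropriate metrics will depend on the concept of fairness relevant to your use case.

```python
import pandas as pd

# Hypothetical data: one row per person the AI system produced an outcome for.
# 'group' is a protected attribute; 'favourable_outcome' is 1 if the outcome
# was favourable (for example, an approval) and 0 otherwise.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "favourable_outcome": [1, 1, 0, 1, 0, 0, 0],
})

# Favourable-outcome (selection) rate per group.
rates = df.groupby("group")["favourable_outcome"].mean()

# Demographic parity difference: gap between the best- and worst-treated groups.
# A value near 0 suggests similar outcome rates; larger gaps warrant investigation.
parity_difference = rates.max() - rates.min()

print(rates)
print(f"Demographic parity difference: {parity_difference:.2f}")
```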

  • 5.  Reliability and safety

  • Under Australia's AI Ethics Principles, AI systems should throughout their lifecycle reliably operate in accordance with their intended purpose.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    5.1    Data suitability

    If your AI system requires the input of data to operate, or you are training or evaluating an AI model, can you explain why the chosen data is suitable for your use case?

    Consider data quality and factors such as accuracy, timeliness, completeness, consistency, lineage, provenance and volume.
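    For illustration, a minimal sketch of some basic data quality checks (completeness, timeliness, duplicates and volume), assuming a tabular dataset in a hypothetical file training_data.csv with a record_date column. Accuracy, lineage and provenance generally require checks that go beyond what a script can show, such as reviewing data collection and custodianship arrangements.

```python
import pandas as pd

# Hypothetical input dataset for the AI system.
df = pd.read_csv("training_data.csv", parse_dates=["record_date"])

# Completeness: share of non-missing values per column.
completeness = 1 - df.isna().mean()

# Timeliness: age of the most recent record relative to today.
most_recent = df["record_date"].max()
age_days = (pd.Timestamp.today() - most_recent).days

# Consistency: duplicate records that may distort training or evaluation.
duplicate_rows = int(df.duplicated().sum())

# Volume: is there enough data to support the use case?
row_count = len(df)

print(completeness)
print(f"Most recent record is {age_days} days old")
print(f"Duplicate rows: {duplicate_rows}, total rows: {row_count}")
```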

    5.2    Indigenous data 

    If your AI system uses Indigenous data, including where any outputs relate to Indigenous people, have you ensured that your AI use case is consistent with the Framework for Governance of Indigenous Data?

    Consider whether your use of Indigenous data and AI outputs is consistent with the expectations of Indigenous people, and the Framework for Governance of Indigenous Data (GID). See definition of Indigenous data in guidance material.

    5.3    Suitability of procured AI model

    If you are procuring an AI model, can you explain its suitability for your use case?  

    This may include multiple models or a class of models, and includes using open-source models, application programming interfaces (APIs) or otherwise sourcing or adapting models. Factors to consider are outlined in the guidance document.

    5.4    Testing

    Have you tested, or do you have a plan to test, the AI system before deployment?

    Outline any areas of concern in the results from testing. If testing is yet to occur, outline the elements to be considered in your testing plan (for example, the model’s accuracy).
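    If the use case involves a predictive model, the sketch below illustrates one way accuracy might be measured on a held-out test set. It uses placeholder data and a placeholder scikit-learn model; substitute the real data, the model being developed or procured, and the metrics that matter for your use case.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Placeholder data and model purely for illustration.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# Hold out a test set that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model.fit(X_train, y_train)
predictions = model.predict(X_test)

# Overall accuracy plus per-class precision and recall, which can surface
# areas of concern that a single accuracy figure hides.
print(f"Accuracy: {accuracy_score(y_test, predictions):.3f}")
print(classification_report(y_test, predictions))
```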

    5.5    Pilot

    Have you conducted, or will you conduct, a pilot of your use case before deploying?

    If answering ‘yes’, explain what you have learned or hope to learn in relation to reliability and safety and, if applicable, outline how you adjusted the use of AI. 

    5.6    Monitoring

    Have you established a plan to monitor and evaluate the performance of your AI system?

    If answering ‘yes’, explain how you will monitor and evaluate performance. 
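    For illustration, a minimal sketch of one possible monitoring approach: periodically recording a small set of performance indicators to an append-only log so trends can be reviewed and audited. The indicators shown are hypothetical; choose measures that reflect your use case and risk profile.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class MonitoringRecord:
    """One periodic snapshot of AI system performance (illustrative fields only)."""
    timestamp: str
    accuracy: float            # performance on recently labelled cases
    positive_rate: float       # share of favourable outcomes, to watch for drift
    human_overrides: int       # how often operators overruled the system
    complaints_received: int   # stakeholder concerns raised in the period

def log_snapshot(record: MonitoringRecord, path: str = "monitoring_log.jsonl") -> None:
    # Append each snapshot so the history is preserved for later review.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_snapshot(MonitoringRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    accuracy=0.91,
    positive_rate=0.34,
    human_overrides=3,
    complaints_received=0,
))
```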

    5.7    Preparedness to intervene or disengage

    Have you established clear processes for human intervention or safely disengaging the AI system where necessary (for example, if stakeholders raise valid concerns with insights or decisions or an unresolvable issue is identified)?  

    See guidance document for resources to assist you in establishing appropriate processes.

  • 6.  Privacy protection and security

  • Under Australia's AI Ethics Principles, AI systems should throughout their lifecycle respect and uphold privacy rights and data protection, and ensure data security.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    6.1    Minimise and protect personal information

    Are you satisfied that any collection, use or disclosure of personal information is necessary, reasonable and proportionate for your AI use case?

    See guidance on data minimisation and privacy-enhancing technologies.
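    For illustration, a minimal sketch of data minimisation before records are passed to an AI system: personal information the system does not need is dropped, and a linking key is pseudonymised. The record structure and field names are hypothetical, and in practice a keyed hash or tokenisation service is preferable to a plain hash.

```python
import copy
import hashlib

# Hypothetical record containing personal information collected for the use case.
record = {
    "customer_id": "C-10293",
    "full_name": "Jane Citizen",
    "email": "jane@example.com",
    "postcode": "2600",
    "claim_text": "Requesting a review of my assessment outcome.",
}

# Fields the AI system does not need to perform its task.
FIELDS_TO_DROP = {"full_name", "email"}
# Fields needed only to link results back to the source record.
FIELDS_TO_PSEUDONYMISE = {"customer_id"}

def minimise(rec: dict) -> dict:
    """Drop unnecessary personal information and pseudonymise linking keys."""
    out = copy.deepcopy(rec)
    for field in FIELDS_TO_DROP:
        out.pop(field, None)
    for field in FIELDS_TO_PSEUDONYMISE:
        if field in out:
            # Illustrative only: an unkeyed hash is weak pseudonymisation.
            out[field] = hashlib.sha256(out[field].encode()).hexdigest()[:12]
    return out

print(minimise(record))
```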

    6.2    Privacy assessment

    Has the AI use case undergone a Privacy Threshold Assessment or Privacy Impact Assessment?

    6.3    Authority to operate

    Has the AI system been authorised or does it fall within an existing authority to operate in your environment, in accordance with Protective Security Policy Framework (PSPF) Policy 11: Robust ICT systems?

    Engage with your agency’s IT Security Adviser and consider the latest security guidance and strategies for AI use (such as Engaging with AI from the Australian Signals Directorate).

  • 7. Transparency and explainability

  • Under Australia's AI Ethics Principles, there should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and can find out when an AI system is engaging with them.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    7.1    Consultation

    Have you consulted stakeholders representing all relevant communities or groups that may be significantly affected throughout the lifecycle of the AI use case?

    Refer to the list of stakeholders identified in section 2. Seek out community representatives with the appropriate skills, knowledge or experience to engage with AI ethics issues. Consult the guidance document for prompts and resources to assist you.

    7.2    Public visibility

    Will appropriate information (such as the scope and goals) about the use of AI be made publicly available?

    See guidance document for advice on appropriate transparency mechanisms, information to include and factors to consider in deciding to publish or not publish AI use information.

    7.3    Maintain appropriate documentation and records

    Have you ensured that appropriate documentation and records will be maintained throughout the lifecycle of the AI use case?

    Ensure you comply with requirements for maintaining reliable records of decisions, testing and the information and data assets used in an AI system. This is important to enable internal and external scrutiny, continuity of knowledge and accountability.

    7.4    Disclosing AI interactions and outputs

    Will people directly interacting with the AI system or relying on its outputs be made aware of the interaction or that they are relying on AI generated output? How?

    Consider members of the public or government officials who may interact with the system, or decision makers who may rely on its outputs.

    7.5    Offer appropriate explanations

    If your AI system will materially influence administrative action or decision making by or about individuals, groups, organisations or communities, will your AI system allow for appropriate explanation of the factors leading to AI generated decisions, recommendations or insights?

  • 8. Contestability

  • Under Australia's AI Ethics Principles, when an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    8.1    Notification of AI affecting rights

    Will individuals, groups, organisations or communities be notified if an administrative action with a legal or similarly significant effect on them was materially influenced by the AI system?

    See guidance document for help interpreting ‘administrative action’, ‘materially influenced’ and ‘legal or similarly significant effect’ as well as recommendations for notification content.

    8.2    Challenging administrative actions influenced by AI

    Is there a timely and accessible process to challenge the administrative actions discussed at 8.1?

    Administrative law is the body of law that regulates government administrative action. Access to review of government administrative action is a key component of access to justice. Consistent with best practice in administrative action, ensure that no person could lose a right, privilege or entitlement without access to a review process or an effective way to challenge an AI generated or informed decision. 

  • 9. Accountability

  • Under Australia's AI Ethics Principles, those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. 

    9.1    Establishing responsibilities

    Identify who will be responsible for:

    • use of AI insights and decisions
    • monitoring the performance of the AI system
    • data governance.

    Where feasible, it is recommended that the same person does not hold all 3 of these roles. The responsible officers should be appropriately senior, skilled, and qualified. 

    9.2    Training of AI system operators

    For question 9.2, indicate either yes, no or N/A, and explain your answer.

    Is there a process in place to ensure operators of the AI system are sufficiently skilled and trained?

    With any automated system, there is a risk of overreliance on results. It is important that the operators of the system, including any person who exercises judgment over the use of insights, or responses to alerts, are appropriately trained in the use of the AI system. Training should be sufficient to understand how to appropriately use the AI system, and to monitor and critically evaluate outcomes.

  • 10. Human-centred values

  • Under Australia's AI Ethics Principles, AI systems should throughout their lifecycle respect human rights, diversity and the autonomy of individuals. 

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    10.1    Incorporating diversity

    Are you satisfied that you have incorporated diversity and people with appropriately diverse skills, experience and backgrounds throughout the lifecycle of your AI use case?

    Consider how you have incorporated diversity of perspective throughout the lifecycle of your AI use case – for example, through the choice of data, the composition of development and deployment teams, and the stakeholder and user groups you choose to consult.

    10.2    Human rights obligations

    Have you consulted an appropriate source of legal advice or otherwise ensured that your AI use case and the use of data align with human rights obligations?

    It is recommended you complete this question after completing previous sections of the assessment. This approach will enable a more considered assessment of the human rights implications of your AI use case.

  • 11. Internal review and next steps

    11.1    Legal review of AI use case

    This section must be completed by a qualified legal adviser. Ensure any supporting legal advice is available for the remaining review steps. Repeat this step if there are significant changes.
    The response to this section should include:

    • the statement ‘I am/am not satisfied that the AI use case and the use of data meet legal requirements’
    • comments (optional)
    • name and position of legal adviser
    • date.

    11.2    Risk summary table

    In the table below, list any risks identified in section 3 (the threshold assessment) or subsequently as having a risk severity of ‘medium’ or ‘high’. Also list any instances where you have answered ‘no’ in any of the questions in sections 4 to 10.

    As you proceed through internal review (section 11.3) and, if applicable, external review (section 11.4), list any agreed risk treatments and assess residual risk using the risk matrix in section 3. 

    Risk summary table

    Risk           Risk treatments           Residual risk
    [Example]      [Example]                 [Example]

    11.3    Internal review of AI use case

    An internal agency governance body designated by your agency’s Accountable Authority must review the assessment and the risks outlined in the risk summary table.

    The governance body may decide to accept any ‘medium’ risks, recommend risk treatments, or decide not to accept the risk and recommend not proceeding with the AI use case.

    List recommendations of your agency governance body below.

    11.4    External review of AI use case

    If, following internal review (section 11.3), there are any residual risks with a ‘high’ risk rating, consider whether the AI use case and this assessment would benefit from external review. 

    If an external review recommends further risk treatments or adjustments to the use case, your agency must consider these recommendations, decide which to implement, and whether to accept any residual risk and proceed with the use case.

    If applicable, list any recommendations arising from external review below and record the agency response to these recommendations.

    The assessment should answer the following questions about the external review.

    • Has your AI use case been subject to external review? Answer yes, no or not applicable. 
    • Who conducted the external review?
    • What date was an external review last completed?
    • What are the external review recommendations? 
    • For each recommendation, what is the agency response? 