For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Are you satisfied that any collection, use or disclosure of personal information is necessary, reasonable and proportionate for your AI use case?
See guidance on data minimisation and privacy enhancing technologies.
Has the AI use case undergone a Privacy Threshold Assessment or Privacy Impact Assessment?
Has the AI system been authorised or does it fall within an existing authority to operate in your environment, in accordance with Protective Security Policy Framework (PSPF) Policy 11: Robust ICT systems?
Engage with your agency’s IT Security Adviser and consider the latest security guidance and strategies for AI use (such as Engaging with AI from the Australian Signals Directorate).
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Have you consulted stakeholders representing all relevant communities or groups that may be significantly affected throughout the lifecycle of the AI use case?
Refer to the list of stakeholders identified in section 2. Seek out community representatives with the appropriate skills, knowledge or experience to engage with AI ethics issues. Consult the guidance document for prompts and resources to assist you.
Will appropriate information (such as the scope and goals) about the use of AI be made publicly available?
See the guidance document for advice on appropriate transparency mechanisms, the information to include, and factors to consider when deciding whether or not to publish AI use information.
Have you ensured that appropriate documentation and records will be maintained throughout the lifecycle of the AI use case?
Ensure you comply with requirements for maintaining reliable records of decisions, testing and the information and data assets used in an AI system. This is important to enable internal and external scrutiny, continuity of knowledge and accountability.
Will people directly interacting with the AI system, or relying on its outputs, be made aware of the interaction or that they are relying on AI-generated output? How?
Consider members of the public or government officials who may interact with the system, or decision makers who may rely on its outputs.
If your AI system will materially influence administrative action or decision making by or about individuals, groups, organisations or communities, will it allow for appropriate explanation of the factors leading to AI-generated decisions, recommendations or insights?
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Will individuals, groups, organisations or communities be notified if an administrative action with a legal or similarly significant effect on them was materially influenced by the AI system?
See the guidance document for help interpreting ‘administrative action’, ‘materially influenced’ and ‘legal or similarly significant effect’, as well as recommendations for notification content.
Is there a timely and accessible process to challenge the administrative actions discussed at 8.1?
Administrative law is the body of law that regulates government administrative action. Access to review of government administrative action is a key component of access to justice. Consistent with best practice in administrative action, ensure that no person could lose a right, privilege or entitlement without access to a review process or an effective way to challenge an AI-generated or AI-informed decision.
Identify who will be responsible for:
Where feasible, it is recommended that the same person does not hold all three of these roles. The responsible officers should be appropriately senior, skilled and qualified.
For question 9.2, indicate either yes, no or N/A, and explain your answer.
Is there a process in place to ensure operators of the AI system are sufficiently skilled and trained?
As with all automated systems, there is a risk of overreliance on results. It is important that operators of the system, including any person who exercises judgment over the use of insights or responses to alerts, are appropriately trained in the use of the AI system. Training should be sufficient for operators to understand how to use the AI system appropriately, and to monitor and critically evaluate its outcomes.
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Are you satisfied that you have incorporated diversity and people with appropriately diverse skills, experience and backgrounds throughout the lifecycle of your AI use case?
Consider how you have incorporated diversity of perspective throughout the lifecycle of your AI use case – for example, through the choice of data, the composition of development and deployment teams, and the stakeholder and user groups you choose to consult.
Have you consulted an appropriate source of legal advice or otherwise ensured that your AI use case and the use of data align with human rights obligations?
It is recommended you complete this question after completing previous sections of the assessment. This approach will enable a more considered assessment of the human rights implications of your AI use case.
This section must be completed by a qualified legal adviser. Ensure any supporting legal advice is available for the remaining review steps. Repeat this step if there are significant changes.
The response to this section should include:
In the table below, list any risks identified in section 3 (the threshold assessment) or subsequently as having a risk severity of ‘medium’ or ‘high’. Also list any instances where you have answered ‘no’ to any of the questions in sections 4 to 10.
As you proceed through internal review (section 11.3) and, if applicable, external review (section 11.4), list any agreed risk treatments and assess residual risk using the risk matrix in section 3.
Risk summary table

| Risk | Risk treatments | Residual risk |
| --- | --- | --- |
| [Example] | [Example] | [Example] |
An internal agency governance body designated by your agency’s Accountable Authority must review the assessment and the risks outlined in the risk summary table.
The governance body may decide to accept any ‘medium’ risks, recommend risk treatments, or decline to accept the risk and recommend not proceeding with the AI use case.
List recommendations of your agency governance body below.
If, following internal review (section 11.3), there are any residual risks with a ‘high’ risk rating, consider whether the AI use case and this assessment would benefit from external review.
If an external review recommends further risk treatments or adjustments to the use case, your agency must consider these recommendations, decide which to implement, and whether to accept any residual risk and proceed with the use case.
If applicable, list any recommendations arising from external review below and record the agency response to these recommendations.
The assessment should answer the following questions about the external review.