| Likelihood | Probability | Description |
|---|---|---|
| Almost certain | 91% and above | The risk is almost certain to eventuate within the foreseeable future. |
| Likely | 61–90% | The risk will probably eventuate within the foreseeable future. |
| Possible | 31–60% | The risk may eventuate within the foreseeable future. |
| Unlikely | 5–30% | The risk may eventuate at some time but is not likely to occur in the foreseeable future. |
| Rare | Less than 5% | The risk will only eventuate in exceptional circumstances or as a result of a combination of unusual events. |
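The probability bands above can be expressed as a simple threshold check. The sketch below is illustrative only (the function name and boundary handling are assumptions, not part of the framework); in particular, it assumes values falling between bands round up to the next band.

```python
def likelihood_band(probability_pct: float) -> str:
    """Map a probability (in percent) to the likelihood bands in the
    table above. Illustrative sketch only; boundary handling between
    bands (e.g. 30.5%) is an assumption."""
    if probability_pct < 5:
        return "Rare"
    if probability_pct <= 30:
        return "Unlikely"
    if probability_pct <= 60:
        return "Possible"
    if probability_pct <= 90:
        return "Likely"
    return "Almost certain"
```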
Complete the information below:
• Name of AI use case.
• Reference number.
• Lead agency.
• Assessment contact officer (name and email).
• Executive sponsor (name and email).
In plain language, briefly explain how you are using or intend to use AI. 200 words or less.
Briefly explain what type of AI technology you are using or intend to use. 100 words or less.
Which of the following lifecycle stages best describes the current stage of your AI use case?
These stages can take place in an iterative manner and are not necessarily sequential. They are adapted from the OECD’s definition of the AI system lifecycle. Refer to the guidance for further information. Select only one.
Assessments must be reviewed when use cases either move to a different stage of their lifecycle or significant changes occur to the scope, function or operational context of the use case. Consult the guidance document and, if in doubt, contact the DTA.
Indicate the date or milestone that will trigger the next review of the AI use case.
Record the review history for this assessment. Include the review dates and brief summaries of changes arising from reviews (50 words or less).
Using the risk matrix, determine the severity of each of the risks in the table below, accounting for any risk mitigations and treatments. Provide a rationale and an explanation of relevant risk controls that are planned or in place. The guidance document contains consequence and likelihood descriptors and other information to support the risk assessment.
The risk assessment should reflect the intended scope, function and risk controls of the AI use case. Keep the rationale for each risk rating clear and concise, aiming for no more than 200 words per risk.
| Likelihood/Consequence | Insignificant | Minor | Moderate | Major | Severe |
|---|---|---|---|---|---|
| Almost certain | Medium | Medium | High | High | High |
| Likely | Medium | Medium | Medium | High | High |
| Possible | Low | Medium | Medium | High | High |
| Unlikely | Low | Low | Medium | Medium | High |
| Rare | Low | Low | Low | Medium | Medium |
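The matrix above can be read as a two-dimensional lookup from likelihood and consequence to a risk rating. A minimal sketch follows; the names are illustrative and not part of the framework.

```python
# The risk matrix from the table above, encoded as nested dictionaries
# keyed by likelihood, then consequence. Illustrative sketch only.
RISK_MATRIX = {
    "Almost certain": {"Insignificant": "Medium", "Minor": "Medium",
                       "Moderate": "High", "Major": "High", "Severe": "High"},
    "Likely":   {"Insignificant": "Medium", "Minor": "Medium",
                 "Moderate": "Medium", "Major": "High", "Severe": "High"},
    "Possible": {"Insignificant": "Low", "Minor": "Medium",
                 "Moderate": "Medium", "Major": "High", "Severe": "High"},
    "Unlikely": {"Insignificant": "Low", "Minor": "Low",
                 "Moderate": "Medium", "Major": "Medium", "Severe": "High"},
    "Rare":     {"Insignificant": "Low", "Minor": "Low",
                 "Moderate": "Low", "Major": "Medium", "Severe": "Medium"},
}

def risk_rating(likelihood: str, consequence: str) -> str:
    """Look up the risk rating for a likelihood/consequence pair."""
    return RISK_MATRIX[likelihood][consequence]
```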
What is the risk (low, medium or high) of the use of AI:
If the assessment contact officer is satisfied that all risks in the threshold assessment are low, then they may recommend that a full assessment is not needed and that the agency accept the low risk.
If one or more risks are medium or above, then a full assessment must be completed, unless you amend the AI use case’s scope, function or risk controls such that the assessment contact officer is satisfied that all risks in the threshold assessment are low.
You may decide not to accept the risk and not proceed with the AI use case.
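The threshold decision rule described above can be summarised as: a full assessment is required unless every assessed risk is rated low. A minimal sketch of that rule (illustrative only, not an official tool):

```python
def full_assessment_required(risk_ratings: list[str]) -> bool:
    """Return True if any assessed risk is rated above Low, meaning a
    full assessment must be completed. Illustrative sketch of the
    threshold rule, assuming ratings are 'Low', 'Medium' or 'High'."""
    return any(rating != "Low" for rating in risk_ratings)
```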
The assessment contact officer recommendation should include:
The executive sponsor endorsement should include:
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Do you have a clear definition of what constitutes a fair outcome in the context of your use of AI?
Where appropriate, you should consult relevant domain experts, affected parties and stakeholders to determine how to contextualise fairness for your use of AI. Consider inclusion and accessibility. Consult the guidance document for prompts and resources to assist you.
Do you have a way of measuring (quantitatively or qualitatively) the fairness of system outcomes?
Measuring fairness is an important step in identifying and mitigating fairness risks. A wide range of metrics are available to address various concepts of fairness. Consult the guidance document for resources to assist you.
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
If your AI system requires the input of data to operate, or you are training or evaluating an AI model, can you explain why the chosen data is suitable for your use case?
Consider data quality and factors such as accuracy, timeliness, completeness, consistency, lineage, provenance and volume.
If your AI system uses Indigenous data, including where any outputs relate to Indigenous people, have you ensured that your AI use case is consistent with the Framework for Governance of Indigenous Data?
Consider whether your use of Indigenous data and AI outputs is consistent with the expectations of Indigenous people, and the Framework for Governance of Indigenous Data (GID). See definition of Indigenous data in guidance material.
If you are procuring an AI model, can you explain its suitability for your use case?
May include multiple models or a class of models. Includes using open-source models, application programming interfaces (APIs) or otherwise sourcing or adapting models. Factors to consider are outlined in guidance.
Outline any areas of concern in results from testing. If testing is yet to occur, outline elements to be considered in the testing plan (for example, the model’s accuracy).
Have you conducted, or will you conduct, a pilot of your use case before deploying?
If answering ‘yes’, explain what you have learned or hope to learn in relation to reliability and safety and, if applicable, outline how you adjusted the use of AI.
Have you established a plan to monitor and evaluate the performance of your AI system?
If answering ‘yes’, explain how you will monitor and evaluate performance.
Have you established clear processes for human intervention or safely disengaging the AI system where necessary (for example, if stakeholders raise valid concerns with insights or decisions or an unresolvable issue is identified)?
See guidance document for resources to assist you in establishing appropriate processes.
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Are you satisfied that any collection, use or disclosure of personal information is necessary, reasonable and proportionate for your AI use case?
See guidance on data minimisation and privacy enhancing technologies.
Has the AI use case undergone a Privacy Threshold Assessment or Privacy Impact Assessment?
Has the AI system been authorised or does it fall within an existing authority to operate in your environment, in accordance with Protective Security Policy Framework (PSPF) Policy 11: Robust ICT systems?
Engage with your agency’s IT Security Adviser and consider the latest security guidance and strategies for AI use (such as Engaging with AI from the Australian Signals Directorate).
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Have you consulted stakeholders representing all relevant communities or groups that may be significantly affected throughout the lifecycle of the AI use case?
Refer to the list of stakeholders identified in section 2. Seek out community representatives with the appropriate skills, knowledge or experience to engage with AI ethics issues. Consult the guidance document for prompts and resources to assist you.
Will appropriate information (such as the scope and goals) about the use of AI be made publicly available?
See guidance document for advice on appropriate transparency mechanisms, information to include and factors to consider in deciding to publish or not publish AI use information.
Have you ensured that appropriate documentation and records will be maintained throughout the lifecycle of the AI use case?
Ensure you comply with requirements for maintaining reliable records of decisions, testing and the information and data assets used in an AI system. This is important to enable internal and external scrutiny, continuity of knowledge and accountability.
Will people directly interacting with the AI system or relying on its outputs be made aware of the interaction or that they are relying on AI generated output? How?
Consider members of the public or government officials who may interact with the system, or decision makers who may rely on its outputs.
If your AI system will materially influence administrative action or decision making by or about individuals, groups, organisations or communities, will your AI system allow for appropriate explanation of the factors leading to AI generated decisions, recommendations or insights?
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
Will individuals, groups, organisations or communities be notified if an administrative action with a legal or similarly significant effect on them was materially influenced by the AI system?
See guidance document for help interpreting ‘administrative action’, ‘materially influenced’ and ‘legal or similarly significant effect’ as well as recommendations for notification content.
Is there a timely and accessible process to challenge the administrative actions discussed at 8.1?
Administrative law is the body of law that regulates government administrative action. Access to review of government administrative action is a key component of access to justice. Consistent with best practice in administrative action, ensure that no person could lose a right, privilege or entitlement without access to a review process or an effective way to challenge an AI generated or informed decision.
Identify who will be responsible for:
Where feasible, it is recommended that the same person does not hold all 3 of these roles. The responsible officers should be appropriately senior, skilled and qualified.
For question 9.2, indicate either yes, no or N/A, and explain your answer.
Is there a process in place to ensure operators of the AI system are sufficiently skilled and trained?
With all automated systems, there is always the risk of overreliance on results. It is important that the operators of the system, including any person who exercises judgment over the use of insights, or responses to alerts, are appropriately trained on the use of the AI system. Training should be sufficient to understand how to appropriately use the AI system, and to monitor and critically evaluate outcomes.