2. Purpose and expected benefits
2.1 Problem definition
Describe the problem that you are trying to solve.
For example, the problem might be that your agency receives a high volume of public submissions, and that this volume makes it difficult to engage with the detail of the issues raised in a timely manner.
Do not describe how you plan to fix the problem or how AI will be used.
Although ‘problem’ implies a negative framing, the problem may simply be that your agency is unable to take full advantage of an opportunity to do things in a better or more efficient way.
2.2 AI use case purpose
Clearly and concisely describe the purpose of your use of AI, focusing on how it will address the problem you described at section 2.1.
Your answer may read as a positive restatement of the problem and how it will be addressed.
For example, the purpose may be to enable you to process public submissions more efficiently and effectively and engage with the issues that they raise in more depth.
2.3 Non-AI alternatives
Briefly outline non-AI alternatives that could address the problem you described at section 2.1.
Non‑AI alternatives may have advantages over solutions involving AI. For example, they may be cheaper, safer or more reliable.
Considering these alternatives will help clarify the benefits and drawbacks of using AI and help your agency make a more informed decision about whether to proceed with an AI-based solution.
2.4 Identifying stakeholders
Conduct a mapping exercise to identify the individuals or groups who may be affected by the AI use case. Consider holding a workshop or brainstorming session with a diverse team to identify the different direct and indirect stakeholders of your AI use case.
The stakeholder mapping aid attached to the impact assessment tool may help generate discussion on the types of stakeholder groups to consider. Please note that the table is provided as a prompt to aid discussion and is not intended as a prescriptive or comprehensive list.
2.5 Expected benefits
This section requires you to explain the expected benefits of the AI use case, considering the stakeholders identified in the previous question. The AI Ethics Principles specify that throughout their lifecycle, AI systems should benefit individuals, society and the environment.
This analysis should be supported by specific metrics or qualitative analysis. Metrics should be quantifiable measures of positive outcomes that can be tracked after the AI is deployed to assess the value of using AI. Any qualitative analysis should consider whether there is an expected positive outcome and whether AI is a good fit for the relevant task, particularly when compared with the non‑AI alternatives you identified previously. Benefits may include gaining new insights or data.
Consider consulting the following resources for further advice: