1. Basic information

1.1 AI use case profile

This section is intended to record basic information about the AI use case.

Name of AI use case

Choose a clear, simple name that accurately conveys the nature of the use case.

Internal reference number or identifier 

Assign a unique reference number or other identifier for your assessment. This is intended to assist with internal record keeping and engagement with the DTA.

Lead agency

The agency with primary responsibility for the AI use case. Where 2 or more agencies are jointly leading, nominate one as the contact point for assessment.

1.2 Establishing impact assessment responsibilities

Assessing officer

An officer assigned to complete the assessment, coordinate the end-to-end process and serve as the contact point for any assessment queries. Depending on the use case and agency context, they may be a technical, data, governance or risk specialist, or a policy or project officer from the business area implementing the AI use case in its operations.

Accountable use case owner(s)

This role is described in the AI policy and the Standard for accountability.

Approving officer

This should be an officer with appropriate authority to approve the AI use case assessment, including the inherent risk ratings. Similar to the assessing officer role above, the approving officer’s specific role in the AI use case will depend on the agency and use case context.

1.3 Additional roles and responsibilities

Clear roles and responsibilities are essential for ensuring accountability in the development and use of AI systems. In this section, you are asked to identify any additional individual officers who may have responsibilities related to your AI use case and the underlying AI system(s). Consider the roles already outlined – such as assessing officers or approving officers – as well as other positions that contribute to the AI system's lifecycle or oversight.

This is not intended to create new requirements for specific roles under the AI policy or this impact assessment. It is intended to help agencies record relevant roles and responsibilities, maintain transparency and facilitate accountability during AI use case implementation. For example, you could identify the person(s) responsible for:

  • the decision to use the AI system and the scope of the AI system
  • designing, developing and maintaining the system, such as key personnel of third-party suppliers
  • applying and interpreting the AI system's outputs, including decisions or actions based on those outputs
  • controlling the AI system, with authority to start, stop, or deactivate the system under normal operating conditions
  • monitoring and maintaining performance and safety, meeting quality standards and detecting errors, biases, and unintended consequences
  • disengaging or stopping the system, if immediate intervention is required to prevent or stop harm
  • the governance of the data used for operating, training or validating the AI system

You should consider distributing these roles among multiple officers where feasible, to avoid excessive concentration of responsibilities in a single individual, while ensuring responsible officers are appropriately skilled and senior.

1.4 AI use case description

Briefly explain how you are using or intending to use AI. This should be an 'elevator pitch' that gives the reader a clear idea of the kind of AI use intended, without going into unnecessary technical detail, which is captured in your other project documentation. Use simple, clear language, avoiding technical jargon where possible. You may wish to include:

  • a high level description of the problem the AI use case is trying to solve
  • the way AI will be used
  • the outcome it is intended to achieve.

1.5 In-scope use case

Record whether your AI use case is in scope of the Policy for the responsible use of AI in government (the AI policy). Appendix C of the AI policy specifies the criteria to determine if an AI use case is in scope.

At a minimum, an AI use case is in scope of the AI policy if any of the following apply:

  • The use, misuse or failure of AI could lead to more than insignificant harm to individuals, communities, organisations, the environment or the collective rights of cultural groups including First Nations peoples.
  • The use of AI will materially influence administrative decisions that affect individuals, communities, organisations, the environment or the collective rights of cultural groups including First Nations peoples.
  • It is possible the public will directly interact with, or be significantly impacted by, the AI or its outputs without human review.
  • The AI is designed to use personal or sensitive data (as defined by the Privacy Act 1988 (Cth)) or security classified information (as defined by the Australian Government Protective Security Policy Framework).
  • It is deemed an elevated risk AI use case as directed by the DTA.

Agencies may wish to apply the AI policy to AI use cases that do not meet the above criteria. This includes use cases with specific characteristics or factors unique to an agency’s operating environment that may benefit from applying an impact assessment and governance actions.

The AI policy has been designed to exclude incidental and lower risk uses of AI that do not meet the criteria. Incidental uses of AI may include off-the-shelf software with AI features such as grammar checks and internet searches with AI functionality. The AI policy recognises that incidental usage of AI will grow over time and focuses on uses that require additional oversight and governance.

In assessing whether a use case is in scope, agencies should also carefully consider AI use in the following areas:

  • recruitment and other employment-related decision-making
  • automated decision-making of discretionary decisions
  • administration of justice and democratic processes
  • law enforcement, profiling individuals, and border control
  • health
  • education
  • critical infrastructure.

While use cases in these areas are not automatically high-risk, they are more likely to involve risks that require careful attention through an impact assessment.

For information on how the policy applies to early-stage experimentation, refer to Appendix C of the AI policy.

If your use case is within scope, record the AI policy criteria that apply using the checklist. If your use case meets multiple criteria, tick each one. If you are unsure, it is best practice to select the criteria that most closely reflect your use case.

The criteria are designed to help identify uses of AI that require additional oversight and governance. This provides a clearer picture of the types of uses across government.

Refer to the AI policy for further detail on mandatory use case governance actions. Consult your accountable use case owner and your agency's AI accountable official for agency-specific guidance on fulfilling the mandatory AI policy actions and any internal agency requirements in addition to the mandatory actions.

Note you can also apply the AI policy and impact assessment tool to use cases that do not meet the criteria. In this case, you can select 'not applicable' for this question.

1.6 Type of AI technology

Briefly explain what type of AI technology you are using or intend to use. For example, supervised or unsupervised learning, computer vision, natural language processing, generative AI. 

This may require a more technical answer than the use case description. Aim to be clear and concise with your answer and use terms that a reasonably informed person with experience in the AI field would understand. 

1.7 Usage pattern

Select the AI system usage pattern or patterns that apply to your use case. For usage pattern definitions, refer to the Classification system for AI use.

1.8 Administrative decisions

Only complete this section if you selected 'Decision-making and administrative action' in assessment section 1.7 and if AI automated decision-making is used for an administrative decision under an Act.

Express legislative authority is generally required to automate decision-making of an administrative decision under an Act. Legal advice should be obtained for any proposed use of AI in this context.

Agencies using automated decision-making should review the Commonwealth Ombudsman's Better Practice Guide on Automated Decision Making.

Agencies should generally consider:

  • any legislation or framework that requires a particular decision-maker, such as a minister, to make a decision – for example, the minister may not have discharged their duty if they rely on the AI outputs without proper validation
  • an official's duty of care and diligence under the Public Governance, Performance and Accountability Act 2013 (Cth) – for example, where an official fails to validate AI outputs
  • administrative law requirements of legality and procedural fairness – for example, how an automated decision can be challenged.

1.9 Domain

Select the AI system domain or domains that apply to your use case. For domain definitions, refer to the Classification system for AI use.

1.10 Expert contributions

List any expert consultations undertaken during the assessment, including the nature of the experts' expertise, the specific contributions they made, and how their input informed the assessment. While such consultation is not mandatory, agencies should consider engaging relevant internal or external expertise based on the complexity, novelty or potential impacts of the AI system.

Agencies should:

  • Identify the areas where expert input is most needed, such as ethical considerations, legal risk and compliance, and technological challenges.
  • Engage with a diverse range of experts to ensure a comprehensive assessment.
  • Document the consultation process thoroughly, including the date of consultation, the experts' names and affiliations, and their key recommendations.
  • Summarise how the expert input was integrated into the assessment and any resulting changes to the AI system or its deployment.
  • Review and update the record of expert consultations periodically to ensure it remains relevant and accurate.

1.11 Impact assessment review log

As new information becomes available or design choices are refined, you should reassess all identified risks and consider whether previous responses still reflect the current state of the project. When data sources, functionality, user groups or other project elements change, revise previous answers to maintain clear and accurate records of the risk profile.

The AI policy specifies requirements for monitoring, evaluating and re-validating use cases following deployment. For details, refer to the AI policy.
