The premise of digital inclusion is that ‘everyone should be able to make full use of digital technologies and the benefits they bring, while avoiding their potential negative consequences’ (source: What is digital inclusion?, page 3 of the Australian Digital Inclusion Index).
A digital service refers to the components of an interaction that a user directly engages with when accessing information, completing a task, or seeking assistance through digital means. It includes all user-facing digital touchpoints that facilitate this interaction, such as:
The boundary of a digital service typically covers all user-facing elements involved in the interaction and service delivery, including the service interface, processes handling user inputs, and the immediate responses generated by the system.
Digital services exclude internal backend systems, workflows, administrative processes, and manual interventions that support the service but are not visible to or directly interacted with by the user.
Services that have been fully deployed to a live environment where they are currently being accessed by users. Examples include:
Informational services primarily provide users with access to knowledge, guidance, or content to support decision-making, learning, or understanding, without requiring the user to complete a formal transaction. Examples include:
Services in the design and delivery process with the overall aim of being deployed to a live environment for the first time to their targeted user base. Examples include:
Services that have undergone significant changes to their user experience. Examples include:
Services that have been launched in place of a previous service that was retired. Examples include:
Staff-facing services provide information to government employees or support employee transactions. They may include:
Transactional services enable users to complete a formal interaction or process online that results in an outcome, record, or change in status, often involving the submission of information, applications, or payments. Examples include:
For more information on how ‘portals’ are defined, please refer to the Australian Government Architecture (AGA).
The DTA coordinates assurance for the Australian Government’s digital projects.
The Assurance Framework for Digital and ICT Investments sets out how the DTA works to maximise the value of this assurance in keeping delivery teams focused on what must go right to deliver expected benefits for Australians.
The DCA guidance prepared jointly with the John Grill Institute for Project Leadership is part of a research series engaging leading researchers around the world on the key factors influencing digital project success. This research is then translated into how the DTA works across the Australian Government to deliver on its purpose.
You can learn more about the work of the DTA by visiting dta.gov.au.
This document provides guidance for completing the Australian Government artificial intelligence (AI) impact assessment tool (the tool). Use this supporting guidance to understand, interpret and complete the tool.
The impact assessment tool is for Australian Government teams working on an AI use case. It helps teams identify, assess and manage AI use case impacts and risks against Australia's AI Ethics Principles.
Agencies can use the tool to fulfil AI use case impact assessment requirements under the Policy for the responsible use of AI in government (the AI policy). Refer to the impact assessment tool document for assessment instructions and to the AI policy for key definitions and implementation requirements.
This guidance mirrors the AI impact assessment tool’s 12-section structure. For advice on completing a section in the tool, find the corresponding section number in this guidance.
Assessing officers should familiarise themselves with the AI policy and the AI Ethics Principles. Also consider other Digital Transformation Agency (DTA) resources designed to support government AI adoption, including:
The DTA welcomes user feedback on the tool and supporting guidance. Please send questions or suggestions to ai@dta.gov.au.
Version 2.0
This section is intended to record basic information about the AI use case.
Choose a clear, simple name that accurately conveys the nature of the use case.
Assign a unique reference number or other identifier for your assessment. This is intended to assist with internal record keeping and engagement with the DTA.
The agency with primary responsibility for the AI use case. Where 2 or more agencies are jointly leading, nominate one as the contact point for assessment.
An officer assigned to complete the assessment, coordinate the end-to-end process and serve as the contact point for any assessment queries. Depending on the use case and agency context, they may be a technical, data, governance or risk specialist, or a policy or project officer from the business area implementing the AI use case in its operations.
This role is described in the AI policy and the Standard for accountability.
This should be an officer with appropriate authority to approve the AI use case assessment, including the inherent risk ratings. Similar to the assessing officer role above, the approving officer’s specific role in the AI use case will depend on the agency and use case context.
Clear roles and responsibilities are essential for ensuring accountability in the development and use of AI systems. In this section, you are asked to identify any additional individual officers that may have responsibilities related to your AI use case and the underlying AI system(s). Consider the roles already outlined – such as assessing officers or approving officers – and consider other positions that contribute to the AI system's lifecycle or oversight.
This is not intended to create new requirements for specific roles under the AI policy or this impact assessment. It is intended to help agencies record relevant roles and responsibilities, maintain transparency and facilitate accountability during AI use case implementation. For example, you could identify the person(s) responsible for:
You should consider distributing these roles among multiple officers where feasible, to avoid excessive concentration of responsibilities in a single individual, while ensuring responsible officers are appropriately skilled and senior.
Briefly explain how you are using or intending to use AI. This should be an 'elevator pitch' that gives the reader a clear idea of the kind of AI use intended, without going into unnecessary technical detail, which is captured in your other project documentation. Use simple, clear language, avoiding technical jargon where possible. You may wish to include:
Record whether your AI use case is in scope of the Policy for the responsible use of AI in government (the AI policy). Appendix C of the AI policy specifies the criteria to determine if an AI use case is in scope.
At a minimum, an AI use case is in scope of the AI policy if any of the following apply:
Agencies may wish to apply the AI policy to AI use cases that do not meet the above criteria. This includes use cases with specific characteristics or factors unique to an agency’s operating environment that may benefit from applying an impact assessment and governance actions.
The AI policy has been designed to exclude incidental and lower risk uses of AI that do not meet the criteria. Incidental uses of AI may include off-the-shelf software with AI features such as grammar checks and internet searches with AI functionality. The AI policy recognises that incidental usage of AI will grow over time and focuses on uses that require additional oversight and governance.
In assessing whether a use case is in scope, agencies should also carefully consider AI use in the following areas:
While use cases in these areas are not automatically high-risk, they are more likely to involve risks that require careful attention through an impact assessment.
For information on how the policy applies when doing early-stage experimentation, refer to Appendix C of the AI policy.
If your use case is within scope, record the AI policy criteria that apply using the checklist. If your use case meets multiple criteria, tick each one. If you are unsure, it is best practice to select the criteria that most closely reflect your use case.
The criteria are designed to help identify uses of AI that require additional oversight and governance. This provides a clearer picture of the types of uses across government.
Refer to the AI policy for further detail on mandatory use case governance actions. Consult your accountable use case owner and your agency's AI accountable official for agency-specific guidance on fulfilling the mandatory AI policy actions and any internal agency requirements in addition to the mandatory actions.
Note you can also apply the AI policy and impact assessment tool to use cases that do not meet the criteria. In this case, you can select 'not applicable' for this question.
Briefly explain what type of AI technology you are using or intend to use. For example, supervised or unsupervised learning, computer vision, natural language processing, generative AI.
This may require a more technical answer than the use case description. Aim to be clear and concise with your answer and use terms that a reasonably informed person with experience in the AI field would understand.
Select the AI system usage pattern or patterns that apply to your use case. For usage pattern definitions, refer to the Classification system for AI use.
Only complete this section if you selected 'Decision-making and administrative action' in assessment section 1.7 and if AI automated decision-making is used for an administrative decision under an Act.
Express legislative authority is generally required to automate decision-making of an administrative decision under an Act. Legal advice should be obtained for any proposed use of AI in this context.
Agencies using automated decision-making should review the Commonwealth Ombudsman's Better Practice Guide on Automated Decision Making.
Agencies should generally consider:
Select the AI system domain or domains that apply to your use case. For domain definitions, refer to the Classification system for AI use.
List any expert consultation undertaken during the assessment process, including the nature of their expertise, the specific contributions they made, and how their input informed the assessment process. While such consultation is not mandatory, agencies should consider engaging relevant internal or external expertise based on the complexity, novelty or potential impacts of the AI system.
Agencies should:
As new information becomes available or design choices are refined, you should reassess all identified risks and consider whether previous responses still reflect the current state of the project. When data sources, functionality, user groups or other project elements change, revise previous answers to maintain clear and accurate records of the risk profile.
The AI policy specifies requirements for monitoring, evaluating and re-validating use cases following deployment. For details, refer to the AI policy.
Describe the problem that you are trying to solve.
For example, the problem might be that your agency receives a high volume of public submissions, and that this volume makes it difficult to engage with the detail of issues raised in submissions in a timely manner.
Do not describe how you plan to fix the problem or how AI will be used.
Though ‘problem’ implies a negative framing, the problem may be that your agency is not able to take full advantage of an opportunity to do things in a better or more efficient way.
Clearly and concisely describe the purpose of your use of AI, focusing on how it will address the problem you described at section 2.1.
Your answer may read as a positive restatement of the problem and how it will be addressed.
For example, the purpose may be to enable you to process public submissions more efficiently and effectively and engage with the issues that they raise in more depth.
Briefly outline non-AI alternatives that could address the problem you described at section 2.1.
Non‑AI alternatives may have advantages over solutions involving AI. For example, they may be cheaper, safer or more reliable.
Considering these alternatives will help clarify the benefits and drawbacks of using AI and help your agency make a more informed decision about whether to proceed with an AI-based solution.
Conduct a mapping exercise to identify the individuals or groups who may be affected by the AI use case. Consider holding a workshop or brainstorm with a diverse team to identify the different direct and indirect stakeholders of your AI use case.
The stakeholder mapping aid attached to the impact assessment tool may help generate discussion on the types of stakeholder groups to consider. Please note the table has been provided as a prompt to aid discussion and is not intended as a prescriptive or comprehensive list.
This section requires you to explain the expected benefits of the AI use case, considering the stakeholders identified in the previous question. The AI Ethics Principles specify that throughout their lifecycle, AI systems should benefit individuals, society and the environment.
This analysis should be supported by specific metrics or qualitative analysis. Metrics should be quantifiable measures of positive outcomes that can be measured after the AI is deployed to assess the value of using AI. Any qualitative analysis should consider whether there is an expected positive outcome and whether AI is a good fit to accomplish the relevant task, particularly when compared to the non‑AI alternatives you identified previously. Benefits may include gaining new insights or data.
Consider consulting the following resources for further advice:
To complete the inherent risk assessment, follow these steps.
Inherent risk: reflects the level of risk that exists before any additional or new controls are applied. This is the risk level under standard operating conditions, assuming only existing baseline or standard controls are in place.
Residual risk: reflects the level of risk that remains after new or additional treatments, controls or safeguards have been implemented.
For each risk category listed in the assessment table, determine the likelihood and consequence of the risk occurring for your AI use case. The likelihood descriptors are provided in Table 1 of the impact assessment tool, and consequence descriptors are in the appendix to this guidance.
The inherent risk assessment should reflect the intended scope and function of the AI use case. In conducting your assessment, you should be clear on:
Use the risk matrix provided in Table 2 of the impact assessment tool to determine the risk rating for each category.
Provide clear and concise explanations for each risk rating.
When completing the inherent risk assessment, keep the following in mind:
In this section, you are required to determine the threshold risk rating for the AI use case based on the ratings selected in the sections above. The highest risk rating identified in any earlier sections must be used as the overall risk rating.
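As an illustration only, the sketch below shows how the matrix lookup and the 'highest rating wins' rule might be expressed. Every level name and matrix cell is a placeholder; the authoritative likelihood descriptors and risk matrix are Tables 1 and 2 of the impact assessment tool.

    # Illustrative sketch only: level names and matrix cells are placeholders.
    # The authoritative descriptors and ratings are in Tables 1 and 2 of the
    # impact assessment tool.
    CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "severe"]
    RISK_MATRIX = {  # RISK_MATRIX[likelihood][consequence index] -> rating
        "rare":           ["low", "low", "low", "medium", "medium"],
        "unlikely":       ["low", "low", "medium", "medium", "high"],
        "possible":       ["low", "medium", "medium", "high", "high"],
        "likely":         ["medium", "medium", "high", "high", "very high"],
        "almost certain": ["medium", "high", "high", "very high", "very high"],
    }

    def risk_rating(likelihood: str, consequence: str) -> str:
        """Look up the inherent risk rating for one risk category."""
        return RISK_MATRIX[likelihood][CONSEQUENCE.index(consequence)]

    # The threshold (overall) rating is the highest category rating.
    ORDER = ["low", "medium", "high", "very high"]
    category_ratings = {
        "fairness": risk_rating("possible", "moderate"),  # -> "medium"
        "privacy": risk_rating("unlikely", "minor"),      # -> "low"
        "security": risk_rating("likely", "major"),       # -> "high"
    }
    overall = max(category_ratings.values(), key=ORDER.index)
    print(overall)  # -> "high"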
Once completed, if the assessing officer is satisfied all risks are low, they may recommend that a full assessment is not required and that the approving officer accept the low risks and endorse the use case.
If one or more risks are medium or higher, the assessing officer must either:
Once the assessing officer has made their recommendation, the approving officer must:
Fairness is a core principle in the design and use of AI systems, but it is a complex and contextual concept. Australia’s AI Ethics Principles state that AI systems should be inclusive and accessible and should not involve or result in unfair discrimination. However, there are different and sometimes conflicting definitions of fairness and people may disagree on what is fair.
For example, there is a distinction between:
Different approaches to fairness involve different trade-offs and value judgments. The most appropriate fairness approach will depend on the specific context and objectives of your AI use case.
When defining fairness for your AI use case, you should be aware that AI models are typically trained on broad sets of data that may contain bias. Bias can arise in data where it is incomplete, unrepresentative or reflects societal prejudices. AI models may reproduce biases present in the training data, which can lead to misleading or unfair outputs, insights or recommendations. This may disproportionately impact some groups, such as First Nations people, people with disability, LGBTIQ+ communities and culturally and linguistically diverse communities. For example, an AI tool used to screen job applicants might systematically disadvantage people from certain backgrounds if trained on hiring data that reflects past discrimination.
When defining fairness for your AI use case, consider the inclusivity and accessibility of the AI. AI can lead to unfairness if it creates barriers for individuals or groups who wish to access government services. For example, an AI chatbot designed to provide social security information may produce unfair outcomes because it is more difficult for vulnerable or underrepresented groups to access the digital technologies required to access the chatbot.
When defining fairness for your AI use case, it is recommended that you:
You should also ensure that your definition of fairness complies with anti-discrimination laws. In Australia, it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation, in certain areas of public life including education and employment. Australia’s federal anti‑discrimination laws are contained in the following legislation:
Where the AI will produce information or be involved in decision-making, you should also ensure that your definition of fairness reflects the administrative law principle of procedural fairness, which requires that decision-making is transparent and challengeable.
You may be able to use a combination of quantitative and qualitative approaches to measuring fairness. Quantitative fairness metrics can allow you to compare outcomes across different groups and assess this against fairness criteria. Qualitative assessments, such as stakeholder engagement and expert review, can provide additional context and surface issues that metrics alone might miss.
The specific quantitative metrics you use to measure fairness will depend on the definition of fairness you have adopted for your use case. When selecting fairness metrics, you should:
For examples of commonly used fairness metrics, see the Fairness Assessor Metrics Pattern from the CSIRO's Data61 unit.
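To make this concrete, below is a minimal sketch of one commonly used quantitative metric, the demographic parity (selection rate) difference. It assumes binary outcomes and a single group label, and the data shown is hypothetical; the right metric for your use case depends on the definition of fairness you have adopted.

    from collections import defaultdict

    def selection_rates(outcomes, groups):
        """Positive-outcome rate per group: outcomes are 0/1, groups are labels."""
        totals, positives = defaultdict(int), defaultdict(int)
        for y, g in zip(outcomes, groups):
            totals[g] += 1
            positives[g] += y
        return {g: positives[g] / totals[g] for g in totals}

    def demographic_parity_difference(outcomes, groups):
        """Gap between the highest and lowest group selection rates."""
        rates = selection_rates(outcomes, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical data: 1 = favourable outcome
    outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
    print(round(demographic_parity_difference(outcomes, groups), 3))  # -> 0.267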
Consider some of these qualitative approaches, which may be useful to overcome data limitations and to surface issues that metrics may overlook.
Consult affected communities, stakeholders and domain experts to understand their perspectives and identify potential issues.
Test your AI system with diverse users and solicit their feedback on the fairness and appropriateness of the system’s outputs. Seek out the perspectives of marginalised groups and groups that may be impacted by the AI system.
Engage experts, such as AI ethicists or accessibility and inclusivity specialists, to review the fairness of your system's outputs and the overall approach to fairness. Identify potential gaps or unintended consequences.
The data used to operate, train and validate your AI system has a significant impact on its performance, fairness and safety. In your answer, explain why the chosen data is suitable for your use case. Some relevant considerations are outlined below.
When choosing between datasets, consider whether the data can be separated by marginalised groups, particularly by Indigenous status identifiers. If the data is Indigenous data (see section 6.2 below), you should refer to the Framework for Governance of Indigenous Data.
Agencies should also refer to the Australian Public Service (APS) Data Ethics Framework for guidance on managing and using data and analytics ethically in government, including where AI is used in analytics. The framework is underpinned by 3 key principles: trust, respect and integrity. It provides advice on implementation across different major use cases and agency operations and encourages agencies to assess potential risks and benefits, consider fairness and inclusivity, and engage with stakeholders where appropriate. Visit the Department of Finance website to access the APS Data Ethics Framework.
Data quality should be assessed prior to use in AI systems. Agencies should select applicable metrics to determine a dataset's quality and identify any remediation required before using it for training or validation in AI systems. Relevant metrics to consider include diversity, relevance, accuracy, completeness, timeliness, validity and lack of duplication. One method to ensure good quality data is to set minimum thresholds appropriate to specific use cases, such as through acceptance criteria discussed below at section 6.4. An example of a specific framework for determining data quality in statistical uses is the ABS Data Quality Framework.
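As a sketch only, the following shows how minimum thresholds for two of these metrics (completeness and duplication) might be checked before a dataset is accepted for training or validation. The field names and threshold values are illustrative assumptions, not prescribed levels.

    def quality_report(records, required_fields, max_duplicate_rate=0.01,
                       min_completeness=0.98):
        """Check a list of record dicts against illustrative quality thresholds."""
        n = len(records)
        # Completeness: share of records with every required field populated
        complete = sum(
            all(r.get(f) not in (None, "") for f in required_fields) for r in records
        )
        completeness = complete / n
        # Duplication: share of records that exactly repeat another record
        unique = len({tuple(sorted(r.items())) for r in records})
        duplicate_rate = (n - unique) / n
        return {
            "completeness": completeness,
            "completeness_ok": completeness >= min_completeness,
            "duplicate_rate": duplicate_rate,
            "duplication_ok": duplicate_rate <= max_duplicate_rate,
        }

    # Hypothetical records: the second fails the completeness check
    records = [{"id": 1, "dob": "1990-01-01"}, {"id": 2, "dob": ""}]
    print(quality_report(records, required_fields=["id", "dob"]))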
Where third party material or data is being used to operate, train or validate an AI system, it is important to protect the rights of intellectual property holders. If the AI may use, modify or otherwise handle material in which intellectual property exists, agencies should confirm that both of the following are true:
Otherwise, the AI may infringe third party intellectual property rights.
Agencies should also confirm that the AI system has safeguards in place to prevent the unauthorised use or disclosure of confidential information.
Where data used to operate, train and validate the AI system includes personal information, agencies should confirm that collection, use and disclosure is in accordance with the Australian Privacy Principles (APPs) under the Privacy Act 1988 (Cth) (see section 7 of this guidance).
The data used to train the AI model influences its outputs and may not be relevant to the use case or the Australian context. Consider whether the model is likely to make accurate or reliable predictions concerning Australian subject matter if it has been trained on, for example, US-centric data.
You should also consider data provenance, lineage and volume – as outlined below:
Data provenance: involves keeping records of the data collected, processed and stored by the AI system and creating an audit trail to assign custody and trace accountability for issues. It provides assurance of the chain of custody and its reliability, insofar as origins of the data are documented.
Data lineage: involves documenting data origins and flows to enable stakeholders to better understand how datasets are constructed and processed. This fosters transparency and trust in AI systems.
Data volume: consider the volume of data you need to support the operation, training and validation of your AI system.
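As an illustration of the kind of record keeping this implies, the sketch below defines a hypothetical audit-trail entry for one dataset. The fields shown are assumptions; any real schema should align with your agency's records management practices.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceRecord:
        """Illustrative audit-trail entry for one dataset used by the AI system."""
        dataset: str
        source: str            # where the data originated
        custodian: str         # who is accountable for it
        processing_step: str   # what was done (e.g. "collected", "de-identified")
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    # Hypothetical trail for a single dataset across two processing steps
    trail = [
        ProvenanceRecord("submissions_2024", "public portal", "Data team", "collected"),
        ProvenanceRecord("submissions_2024", "public portal", "Data team", "de-identified"),
    ]
    print(trail[0])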
Describe how any components of your AI system have used or will use Indigenous data, and whether any outputs relate to First Nations individuals, communities or groups.
All Australian Public Service (APS) agencies are required to implement the Framework for Governance of Indigenous Data. This framework adopts the definition of 'Indigenous data' as provided by Maiam nayri Wingara Indigenous Data Sovereignty Collective:
Information or knowledge, in any format or medium, which is about and may affect Indigenous peoples both collectively and individually.
If the data used to operate, train or validate your AI system, or any outputs from your AI system, meet this definition of Indigenous data, refer to the Framework for Governance of Indigenous Data for guidance on applying the framework.
The framework is based on the principles of:
The Framework for Governance of Indigenous Data is also informed by 2 complementary data governance frameworks:
Relevant practices to consider in this context include:
Also consider the use of Indigenous data in the context of the United Nations Declaration on the Rights of Indigenous Peoples and apply the concept of 'free, prior and informed consent' in relation to the use of Indigenous data in AI systems.
If you are procuring an AI model or system from a third-party provider, your procurement process should consider whether the provider has appropriate data management practices, including for data quality and data provenance, in relation to the model. This will help you to identify whether the AI model is fit for the context and purpose of your AI use case.
This may include:
There are many other considerations you should take into account when selecting a procured AI model and contracting with a supplier. The following considerations may be relevant to your use case:
Consider also how your agency will support transparency across the AI supply chain, for example, by notifying the developer of issues encountered in using the model or system. Refer to the DTA's AI procurement resources including the:
Testing is a key element for assuring the responsible and safe use of AI models – for both models developed in-house and externally procured – and in turn, of AI systems. Rigorous testing helps validate that the system performs as intended across diverse scenarios. Thorough and effective testing helps identify problems before deployment.
Testing AI systems against test datasets can reveal biases or possible unintended consequences or issues before real-world deployment. Testing on data that is limited or skewed can fail to reveal shortcomings.
Consider establishing clear and measurable acceptance criteria for the AI system that, if met, would be expected to control harms that are relevant in the context of your AI use case. Acceptance criteria should be specific, objective and verifiable. They are meant to specify the conditions under which a potential harm is adequately controlled.
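For illustration, the sketch below records hypothetical acceptance criteria as specific, verifiable thresholds and checks measured test results against them. Every criterion name and number is a placeholder, not a recommended target.

    # Hypothetical acceptance criteria: each entry names a measurable condition
    # that must hold before the harm it relates to is considered controlled.
    ACCEPTANCE_CRITERIA = {
        "min_accuracy_overall": 0.95,        # placeholder target
        "min_accuracy_worst_group": 0.90,    # placeholder fairness floor
        "max_demographic_parity_gap": 0.05,  # placeholder disparity limit
    }

    def criteria_met(measured: dict) -> dict:
        """Compare measured test results against each acceptance criterion."""
        c = ACCEPTANCE_CRITERIA
        return {
            "min_accuracy_overall": measured["accuracy_overall"] >= c["min_accuracy_overall"],
            "min_accuracy_worst_group": measured["accuracy_worst_group"] >= c["min_accuracy_worst_group"],
            "max_demographic_parity_gap": measured["parity_gap"] <= c["max_demographic_parity_gap"],
        }

    # Hypothetical results: the worst-group accuracy fails the placeholder floor
    results = {"accuracy_overall": 0.96, "accuracy_worst_group": 0.88, "parity_gap": 0.04}
    print(criteria_met(results))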
Consider developing a test plan for the acceptance criteria to outline the proposed testing methods, tools and metrics. Documenting results through a test report will assist with demonstrating accountability and transparency. A test report could include the following:
In your explanation, outline any areas of concern in results from testing. If the AI system has not yet undergone testing, outline elements to be considered in testing plans.
As an example, model accuracy is a key metric for evaluating the performance of an AI system. Accuracy should be considered in the specific context of the AI use case, as the consequences of errors or inaccuracies can vary significantly depending on the domain and application. This can include:
Some of the factors that can influence AI model output accuracy and reliability include:
Ways to assess and validate the accuracy of your model for your AI use case include:
It is important to set accuracy targets that are appropriate for the risk and context of the use case. For high stakes decisions, you should aim for a very high level of accuracy and have clear processes for handling uncertain or borderline cases.
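As a minimal sketch, the following computes simple accuracy-related metrics on a held-out test set, assuming binary labels and predictions. Real evaluations should use metrics matched to the use case and, for high stakes decisions, stricter targets.

    def binary_metrics(y_true, y_pred):
        """Accuracy, precision and recall for binary labels (0/1)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
        return {
            "accuracy": correct / len(y_true),
            "precision": tp / (tp + fp) if (tp + fp) else None,
            "recall": tp / (tp + fn) if (tp + fn) else None,
        }

    # Hypothetical held-out labels and model predictions
    y_true = [1, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1]
    print(binary_metrics(y_true, y_pred))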
Conducting a pilot study is a valuable way to assess the real-world performance and impact of your AI use before full deployment. A well-designed pilot can surface issues related to reliability, safety, fairness and usability that may not be apparent in a controlled development environment.
If you are planning a pilot, your explanation should provide a brief overview of the pilot's:
If you have already completed a pilot, reflect on the key findings and lessons learned, including by:
If you are not planning to conduct a pilot, explain why not. Consider whether the scale, risk or novelty of your use case warrants a pilot phase. Discuss alternative approaches you are taking to validate the performance of your AI use case and gather user feedback prior to full deployment.
Monitoring is key to maintaining the reliability and safety of AI systems over time. It enables active rather than passive oversight and governance, and ensures the agency has ongoing accountability for the AI-assisted performance and decision-making processes.
Your monitoring plan should be tailored to the specific risks and requirements of your use case. In your explanation, describe your approach to monitoring any measurable acceptance criteria (as discussed above at section 6.4) and other relevant metrics such as performance metrics or anomaly detection. In your plan, include your proposed monitoring intervals for your use case. The AI policy requires agencies to establish a clear process to address AI incidents aligned to their ICT management approach. Incident remediation must be overseen by an appropriate governance body or senior executive and should be undertaken in line with any other legal obligations.
Periodically evaluate your monitoring and evaluation mechanisms to ensure they remain effective and aligned with evolving conditions throughout the lifecycle of your AI use case. Examples of events that could influence your monitoring plan are system upgrades, error reports, changes in input data, performance deviation or feedback from stakeholders.
Monitoring can help identify issues that can impact the safety and reliability of your AI system, such as:
Vendors offer monitoring tools that may be worth considering for your use case. For more information on continuous monitoring, refer to the National AI Centre's (NAIC) Implementing Australia's AI Ethics Principles report.
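As one illustrative approach, the sketch below flags when a rolling average of an observed performance score falls below an acceptance floor. The metric, floor and window size are assumptions to be tailored to your monitoring plan.

    from collections import deque

    class PerformanceMonitor:
        """Flag when a rolling-average metric falls below a floor (illustrative)."""
        def __init__(self, floor: float, window: int = 100):
            self.floor = floor
            self.scores = deque(maxlen=window)

        def record(self, score: float) -> bool:
            """Add one observed score; return True if an alert should be raised."""
            self.scores.append(score)
            rolling = sum(self.scores) / len(self.scores)
            return rolling < self.floor

    # Hypothetical scores observed at each monitoring interval
    monitor = PerformanceMonitor(floor=0.9, window=3)
    for s in [0.95, 0.92, 0.88, 0.85]:
        if monitor.record(s):
            print("alert: rolling performance below acceptance floor")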
Relevant stakeholders, including those who operate, use or interact with the AI system, those who monitor AI system performance, and affected stakeholders identified at section 2.4, should have the ability to raise concerns about insights or decisions assisted by the AI system.
Agencies must develop clear pathways for staff or other relevant stakeholders to report AI safety concerns, including AI incidents. Agencies should also document and take appropriate steps in relation to any interventions that occur to ensure consistency and fairness.
In addition, agencies should be prepared to quickly and safely disengage an AI system when an unresolvable issue is identified. This could include a data breach, unauthorised access or system compromise. Consider such scenarios in business continuity, data breach and security response plans.
Agencies should consider the following techniques to avoid overreliance on AI system outputs.
Three techniques to consider at the system design stage:
At the evaluation stage, focus on validating whether the system supports human judgement as intended. Engage directly with users to understand their experience, encourage them to assess outputs critically and suggest improvements. Review user behaviour, feedback loops and decision-making patterns and prompts to confirm that safeguards against overreliance are effective. Use these insights to refine system design, guidance and training materials.
AI system operators play a crucial role in ensuring the responsible and effective use of AI. They must have the necessary skills, knowledge and judgement to understand the system's capabilities and limitations, how to appropriately use the system, interpret its outputs and make informed decisions based on those outputs.
In your answer, describe the process for ensuring AI system operators are adequately trained and skilled. This may include:
Consider what training operators receive before being allowed to use the AI system. Does this training cover technical aspects of the system, as well as ethical and legal considerations?
As a baseline, you may expect that operators:
This includes processes for continuous learning and skill development, and for keeping officers up to date with changes or updates to the AI system.
This can include skills and knowledge assessment, certification or qualification requirements for operators.
Ensure resources and support are available to operators if they have questions or encounter issues. Consider whether this needs to be tailored to the specific needs and risks of your AI system or proposed use case or whether general AI training requirements are sufficient.
Agencies should consider how the AI use case will comply with the Australian Privacy Principles (APPs) in Schedule 1 to the Privacy Act 1988 (Cth). The APPs apply to personal information input into an AI system, as well as the output generated or inferred by an AI system that contains personal information. Under the APPs:
For more information, refer to the APP guidelines and the Office of the Australian Information Commissioner (OAIC) Guidance on privacy and the use of commercially available AI products. Also consider your agency's internal privacy policy and resources and consult your agency's privacy officer.
Your agency may want or need to use privacy enhancing technologies to assist in de-identifying personal information under the APPs or as a risk mitigation/trust building approach. Where the risk of re-identification is very low, de-identified information will no longer comprise personal information and agencies can use the information in ways that the Privacy Act would normally restrict.
Consider the Office of the Australian Information Commissioner's (OAIC) guidance on De-identification and the Privacy Act. The OAIC has also jointly developed a resource with CSIRO's Data61, the De-identification Decision-Making Framework.
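As a simple illustration of reasoning about re-identification risk, the sketch below computes k-anonymity over hypothetical quasi-identifier columns: the smallest group of records sharing the same combination of values. Any real de-identification decision should follow the OAIC and Data61 framework rather than a single metric.

    from collections import Counter

    def k_anonymity(records, quasi_identifiers):
        """Smallest group size sharing the same quasi-identifier values."""
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return min(groups.values())

    # Hypothetical records and quasi-identifier columns
    records = [
        {"postcode": "2600", "age_band": "30-39"},
        {"postcode": "2600", "age_band": "30-39"},
        {"postcode": "2600", "age_band": "40-49"},
    ]
    print(k_anonymity(records, ["postcode", "age_band"]))  # -> 1: high re-identification risk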
The Australian Government Agencies Privacy Code (the Privacy Code) requires Australian Government agencies subject to the Privacy Act 1988 to conduct a privacy impact assessment (PIA) for all 'high privacy risk projects'. A project may be a high privacy risk if the agency reasonably considers that the project involves new or changed ways of handling personal information that are likely to have a significant impact on the privacy of individuals.
To determine whether a PIA is required, you should complete a privacy threshold assessment (PTA). A PTA will help you identify your use case's potential privacy impacts and screen for factors that point to a 'high privacy risk project' requiring a PIA under the Code.
Agencies should conduct a PTA and, if required, a PIA at an early stage of AI use case development or procurement – for example, after identifying the minimum viable product. This will enable the agency to fully consider whether to proceed with the AI use case or to change the approach if the PIA identifies significant negative privacy impacts. It may be appropriate to conduct a PTA and, if required, a PIA earlier than your AI impact assessment using this tool.
If you have not completed a PTA or PIA, explain how you considered potential privacy impacts – for example, if you have determined the AI use case will not involve personal information. Privacy assessments should consider whether relevant individuals have provided informed consent, where required, to the collection, use and disclosure of their personal information in the AI system's training or operation, or as an output for making inferences. Also confirm that any consent obtained has been recorded, including a description of the processes used to obtain it.
For more information, refer to the Office of the Australian Information Commissioner (OAIC) advice for Australian Government agencies on when to conduct a privacy impact assessment. You can also consult your agency's privacy officer and internal privacy policy and resources.
If your AI system has used or will use Indigenous data, you should also consider whether principles of collective or group privacy of First Nations people are relevant and refer to the Framework for Governance of Indigenous Data (see section 6.2 of this guidance).
Agencies should consider the digital and cyber security risks associated with operation of the AI. Agencies may wish to refer to the frameworks and guidance noted below in considering what measures the AI will have in place to address security risks.
The Protective Security Policy Framework (PSPF) applies to non-corporate Commonwealth entities subject to the Public Governance, Performance and Accountability Act 2013 (PGPA Act). Agencies should refer to the PSPF to understand security requirements relevant to AI technologies. These include managing procurement risks, incorporating and enforcing security terms in contracts, addressing foreign ownership, control or influence (FOCI) risks, protecting classified information, and ensuring systems are authorised in accordance with the Information Security Manual (ISM).
You should engage with your agency's information technology security adviser (ITSA) early in the AI use case development and assessment process to ensure it meets all PSPF and ISM requirements.
Agencies should implement security measures to align with Australian Signals Directorate (ASD) guidance on AI data security. This outlines data security risks in the development, testing and deployment of AI, and sets out best practices for securing AI data across stages of the AI lifecycle to address these risks.
Agencies should ensure appropriate procedures are in place to address a data breach or security incident. This may include processes to mitigate the immediate consequences of a data breach or security incident and to ensure any actual or potential ongoing loss to the agency is minimised.
For further mitigation considerations, refer to ASD's guidance on Engaging with AI. It is highly recommended that your agency engages with and implements the mitigation considerations in the guidance. This includes:
Agencies should also consider the requirements outlined in the Department of Home Affairs PSPF Policy Advisory on OFFICIAL Information Use with Generative AI. These include only providing access to certain generative AI products that meet hosting and other security criteria and ensuring staff have relevant training.