You should consult with a diverse range of internal and external stakeholders at every stage of your AI use case development and deployment to help identify potential biases, privacy concerns, and other ethical and legal issues present in your AI use case. This process can also help foster transparency, accountability, and trust with your stakeholders and can help improve their understanding of the technology's benefits and limitations. Refer to the stakeholders you identified in section 2.4.
If your project has the potential to significantly impact First Nations individuals, communities or groups, it is critical that you meaningfully consult with relevant community representatives.
APS Framework for Engagement and Participation: sets principles and standards that underpin effective APS engagement with citizens, community and business and includes practical guidance on engagement methods.
Best practice consultation guidance note: this resource from the Office of Impact Analysis details the Australian Government consultation principles outlined in the Guide to Policy Impact Analysis.
Principles for engagement in projects concerning Aboriginal and Torres Strait Islander peoples: this resource from the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS) provides non-Indigenous policy makers and service designers with the foundational principles for meaningfully engaging with Aboriginal and Torres Strait Islander peoples on projects that impact their communities.
Where appropriate, you should consider options to make the scope and goals of your AI use case publicly available. For instance, consider including this information on the relevant program page on your agency website or through other official communications. This information could include:
All agencies in scope of the AI policy are required to publish an AI transparency statement. Your agency's AI accountable official is responsible for ensuring your agency's transparency statement complies with the AI policy. More information on this requirement is contained in the AI policy and associated Standard for transparency statements. Consult your agency's AI accountable official for specific advice on your use case.
Furthermore, to comply with APP 1 and APP 5, agencies should consider updating their privacy policies with information about their use of AI, for example to advise that personal information may be disclosed to AI system developers or owners.
In some circumstances it may not be appropriate to publish detailed information about your AI use case. When deciding whether to publish this information you should balance the public benefits of AI transparency with the potential risks as well as compatibility with any legal requirements around publication.
For example, you may choose to limit the information you publish, or not publish any information at all, if the use case is still in the experimentation phase, or if publishing may:
Agencies should comply with legislation, policies and standards for maintaining reliable and auditable records of decisions, testing, and the information and data assets used in an AI system. This will enable internal and external scrutiny, continuity of knowledge and accountability, for example when responding to information requests under the Freedom of Information Act 1982 (Cth). It will also support transparency across the AI supply chain: this documentation may be useful to any downstream users of AI models or systems developed by your agency.
Agencies should document AI technologies they are using to perform government functions as well as essential information about AI models, their versions, creators and owners. In addition, artefacts used and produced by AI – such as prompts, inputs and raw outputs – may constitute Commonwealth records under the Archives Act 1983 and may need to be kept for certain periods of time identified in records authorities issued by the National Archives of Australia (NAA). Such Commonwealth records must not be destroyed, disposed of, transferred, damaged or altered except in limited circumstances listed in the Archives Act.
To identify their legal obligations, business areas implementing AI in agencies may want to consult with their information and records management teams. The NAA can also provide advice on how to manage data and records produced by different AI use cases.
Refer to NAA advice on:
Where suitable, you should consider creating the following forms of documentation for any AI system you build. If you are procuring an AI system from an external provider, it may be appropriate to request these documents as part of your tender process.
A system factsheet (sometimes called a model card) is a short document designed to provide an overview of an AI system to non-technical audiences (such as users, members of the public, procurers, and auditors). These factsheets usually include information about the AI system's purpose, intended use, limitations, training data, and performance against key metrics.
Datasheets are documents completed by dataset creators to provide an overview of the data used to train and evaluate an AI system. Datasheets provide key information about the dataset including its contents, data owners, composition, intended uses, sensitivities, provenance, labelling and representativeness.
System decision registries record key decisions made during the development and deployment of an AI system. These registries contain information about what decisions were made, when they were made, who made them and why they were made (the decision rationale).
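To keep these artefacts consistent and auditable, some teams maintain them in a machine-readable form alongside the human-readable documents. The sketch below shows one possible shape for a system factsheet and a decision registry entry; the field names are illustrative assumptions, not a prescribed government schema.

```python
# Illustrative only: one possible machine-readable shape for a system
# factsheet and a decision registry entry. Field names are assumptions,
# not a prescribed schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class SystemFactsheet:
    system_name: str
    purpose: str                    # what the AI system is for
    intended_use: str               # who should use it, and how
    limitations: list[str]          # known failure modes and constraints
    training_data_summary: str      # provenance of training and evaluation data
    key_metrics: dict[str, float]   # performance against key metrics

@dataclass
class DecisionRecord:
    decision: str                   # what was decided
    decided_on: date                # when it was made
    decided_by: str                 # who made it
    rationale: str                  # why it was made

# A registry is simply an append-only list of decision records.
registry: list[DecisionRecord] = [
    DecisionRecord(
        decision="Require human review of all model outputs",
        decided_on=date(2025, 3, 1),
        decided_by="AI accountable official",
        rationale="Use case assessed as medium risk",
    )
]
```

Keeping these records in a structured form makes it easier to version them, query them during audits, and hand them to downstream users of the system.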
It is also best practice to maintain documentation on testing, piloting and monitoring and evaluation of your AI system and use case, in line with the practices outlined in section 6.
For more on AI documentation, see Implementing Australia's AI Ethics Principles.
You should design your use case to inform people that they are interacting with an AI system or are being exposed to content that has been generated by AI. This includes disclosing AI interactions and outputs to internal agency staff and decision-makers, as well as external parties such as members of the public engaging with government.
You should ensure that you disclose when a user is directly interacting with an AI system, especially:
You should ensure that you disclose when someone is being exposed to AI-generated content including where:
Exercise judgment and consider the level of disclosure that the intended audience would expect, including where AI-generated content has been through rigorous fact-checking and editorial review. Err on the side of greater disclosure – norms around appropriate disclosure will continue to develop as AI-generated content becomes more ubiquitous.
When designing or procuring an AI system, you should consider the most appropriate mechanism(s) for disclosing AI interactions. Some examples are outlined below:
Verbal or written disclosures are statements that are heard by or shown to users to inform that they are interacting with (or will be interacting with) an AI system.
For example, disclaimers/warnings, specific clauses in privacy policy and/or terms of use, content labels, visible watermarks, by-lines, physical signage, communication campaigns.
Behavioural disclosure refers to the use of stylistic indicators that help users to identify that they are engaging with AI-generated content. These indicators should generally be used in combination with other forms of disclosure.
For example, using clearly synthetic voices, formal and structured language, or robotic avatars.
Technical disclosures are machine-readable identifiers for AI-generated content.
For example, inclusion in metadata, technical watermarks, cryptographic signatures.
Agencies should consider using AI systems that use industry-standard provenance technologies, such as those aligned with the standard developed by the Coalition for Content Provenance and Authenticity (C2PA).
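As a simplified illustration of the metadata approach (not the C2PA standard itself, which adds cryptographic signing and a richer manifest format), the sketch below embeds a machine-readable "AI-generated" marker in a PNG file using the Pillow library. The field names are illustrative assumptions.

```python
# Simplified illustration of a technical disclosure: embedding a
# machine-readable "AI-generated" marker in PNG metadata with Pillow.
# This is NOT C2PA; production systems should use C2PA-aligned tooling
# with cryptographic signatures.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (100, 100))  # stand-in for an AI-generated image

metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # illustrative field name
metadata.add_text("generator", "example-model-v1")  # hypothetical model id
image.save("output.png", pnginfo=metadata)

# Downstream consumers can read the marker back from the text chunks:
print(Image.open("output.png").text.get("ai_generated"))
```

Plain metadata like this is easy to strip or forge, which is why provenance standards such as C2PA bind the disclosure to the content with cryptographic signatures.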
In certain contexts, it may be best practice not to provide a non-AI alternative, particularly where the AI system is low-risk, improves service delivery without affecting rights or entitlements, and where alternate pathways would create unnecessary cost, complexity, or delay. However, in other situations, offering the ability to request a non-AI alternative can be important.
Explainability refers to accurately and effectively conveying an AI system's decision process to a stakeholder, even if that stakeholder does not fully understand the specifics of how the model works. Explainability facilitates procedural fairness, transparency, independent expert scrutiny and access to justice by ensuring that agencies have the material required to provide affected individuals with the evidence that forms the basis of a decision when needed. To interpret the AI's output and offer an explanation to relevant stakeholders, you should consider whether the agency can access:
You should be able to clearly explain how a government decision or outcome has been made or informed by AI to a range of technical and non-technical audiences. You should also be aware of any requirements in legislation to provide reasons for decisions, both generally and in relation to the particular class of decisions that you are seeking to make using AI.
Explanations may apply globally (how a model broadly works) or locally (why the model has come to a specific decision). You should determine which is more appropriate for your audience. Whichever scope applies, an effective explanation should:
Outline why the AI system output one outcome instead of another outcome.
Focus on the most-relevant factors contributing to the AI system's decision process.
Align with the audience's level of technical (or non-technical) background.
Generalise to similar cases to help the audience predict what the AI system will do.
Providing explanations is relatively straightforward for interpretable models with low complexity and clear parameters. However, in practice, most AI systems have low interpretability and require effective post-hoc explanations that balance accuracy and simplicity. Among other matters, you should also consider defining appropriate timeframes for providing explanations in the context of your use case.
When developing explanations, consider the range of available approaches based on your model type and use case.
Advice on appropriate explanations is available in the National AI Centre's Implementing Australia's AI Ethics Principles report.
Other reputable resources for explainability tools include open-source libraries maintained by academic institutions and research communities and documentation from major cloud platform providers. When selecting tools, prioritise those with active maintenance, clear documentation, and validation through published research.
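As a purely illustrative sketch (assuming the open-source `shap` library, one such community-maintained tool; any comparable, well-validated tool could serve the same role), local and global post-hoc explanations might be produced like this:

```python
# Illustrative sketch of post-hoc explanations using the open-source
# `shap` library (an assumption, not a prescribed tool). The model and
# dataset are placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)

# Local explanation: which features most influenced this single case?
case = X.iloc[[0]]
local_values = explainer.shap_values(case)

# Global explanation: which features matter most across many cases?
global_values = explainer.shap_values(X.iloc[:100])
```

The local values support case-by-case explanations to affected individuals, while the aggregated values support the global explanations of how the model broadly works, as discussed above.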
However, explainable AI algorithms are not the only way to improve system explainability. Human-centred design can also play an important part, including:
You should notify individuals, groups, communities or businesses when an administrative action materially influenced by an AI system has a legal or significant effect on them. This promotes transparency and access to justice, by ensuring individuals can understand how government uses AI to perform actions that affect them and have the opportunity to seek review of that decision.
This notification should state that the action was materially influenced by an AI system and include information on available review rights and how the individual can challenge the action. The notification should be clear, up-to-date, concise and understandable, and should not be complex, lengthy, legalistic or vague. It may be appropriate to provide notification prior to the action being taken or at the same time that the action occurs (for example, an applicant may be asked to acknowledge that AI will be used to a stated extent to assess their application).
An action producing a 'legal effect' is when an individual, group, community or business's legal status or rights are affected, and includes an effect on the:
An action producing a 'significant effect' is when an individual, group, community or business's circumstances, behaviours, interests or choices are affected, and includes an effect on the provision of:
An action may be considered to have been 'materially influenced' by an AI system if:
'Administrative action' is any of the following:
This guidance is designed to supplement, not replace, existing administrative law requirements pertaining to notification of administrative decisions. The Attorney-General's Department is leading work to develop a consistent legislative framework for automated decision-making (ADM), as part of the government's response to recommendation 17.1 of the Robodebt Royal Commission Report.
Individuals, groups, communities or businesses should be provided with a timely opportunity to challenge an administrative action that has a legal or significant effect on them when the action was materially influenced by an AI system. This is an important administrative law principle. It also promotes accountability and improves the quality and consistency of government decisions.
Administrative actions may be subject to both merits review and judicial review.
Diversity of perspective promotes inclusivity, mitigates bias, supports critical thinking and reduces the risk of non-compliance with anti-discrimination laws. It should be incorporated at all stages of the AI system lifecycle.
AI systems require input from stakeholders from a variety of backgrounds, including different ethnicities, genders, ages, abilities and socio-economic statuses. This also includes people with diverse professional backgrounds, such as ethicists, social scientists and domain experts relevant to the AI application. Determining which stakeholders and user groups to consult, which data to use, and the optimal team composition will depend on your AI system.
Failing to adequately incorporate diversity into relevant AI lifecycle stages can have unintended negative consequences, as illustrated in a number of real-world examples:
Resources, including approaches, templates and methods to ensure sufficient diversity and inclusion of your AI system, are described in the NAIC's Implementing Australia's AI Ethics Principles report.
You should consult an appropriate source of advice or otherwise ensure that your AI use case and use of data align with human rights obligations. If you have not done so, explain your reasoning.
It is recommended that you complete this question after you have completed the previous sections of the assessment. This will provide more complete information to enable an assessment of the human rights implications of your AI use case.
In Australia, it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation, in certain areas of public life including education and employment. Australia's federal anti-discrimination laws are contained in the following legislation.
Human rights are defined in the Human Rights (Parliamentary Scrutiny) Act 2011 as the rights and freedoms contained in the 7 core international human rights treaties to which Australia is a party, namely the:
In addition to other rights referred to in this guidance, human rights you may consider as part of your assessment of the AI use case include:
Agencies should consider putting mechanisms in place during the life cycle of the AI system to ensure that the agency itself, or the relevant decision-maker, remains responsible and accountable for a government decision which involves the use of AI. Such mechanisms should clearly define how ultimate responsibility for the decision is retained, even when AI is used to analyse data or generate recommended outcomes.
Accountability should be considered at all stages of the AI system lifecycle. Some of the relevant considerations for different stages are outlined below.
This question seeks to confirm that you have identified and documented any agency-specific legislation, regulations or binding policy instruments that are relevant to your AI use case.
When completing this section:
This section asks whether your agency has sought or obtained legal advice in relation to the AI use case. If you answer 'yes', you should summarise the nature of the legal issue without including the content of the advice. This information should not be disclosed to anyone other than those who need to know or access the information within the agency.
Note that including the actual content of legal advice in this tool may result in waiver of legal professional privilege, meaning the advice could be legally required to be disclosed to others. To avoid unintended waiver, only summarise the subject matter of the advice (for example, 'privacy compliance' or 'intellectual property risks') rather than reproducing or paraphrasing the advice itself.
To complete the risk summary table:
To complete this section, choose an overall residual risk rating for the AI use case. Refer to your response to section 12.3.
If your use case's inherent risk is rated as high at section 3, you are required under the AI policy to apply specific actions, including creating or reusing a governance body for the purpose of governing high-risk AI. You may document the outcome of the governance body review here, including any recommendations and next steps.
This table is designed to help you select the appropriate consequence level for the risk questions in sections 3.1 to 3.8. Examples are illustrative, not exhaustive.
| Risk | Insignificant | Minor | Moderate | Major | Severe |
|---|---|---|---|---|---|
| Negatively affecting public accessibility or inclusivity of government services | Insignificant compromises to accessibility or inclusivity of services. Minor technical issues causing brief inconvenience but no actual barriers to access or inclusion. Issues rapidly resolved with minimal impact on user experience. | Limited, reversible compromises to accessibility or inclusivity of services. Some people experience difficulties accessing services due to technical issues or design oversights. Barriers are short-term and addressed once identified, with additional support provided to people affected. | Many compromises are made to the accessibility or inclusivity of services. Considerable access challenges for a modest number of users. Resolving access issues requires substantial effort and resources. Certain groups may be disproportionately impacted. Affected users experience frustration and delays in receiving services. | Extensive compromises are made to the accessibility or inclusivity of services, which may include some essential services. Ongoing delays that require external technical assistance to resolve. Widespread inconvenience, frustration, public distress and potential legal implications. Vulnerable user groups disproportionately impacted. | Widespread, irreversible, ongoing compromises are made to the accessibility or inclusivity of services, including some essential services. Majority of users, especially vulnerable groups, affected. Essential services inaccessible for extended periods, causing significant public distress, legal implications, and a loss of trust in government efficiency. Comprehensive and immediate actions are urgently needed to rectify the situation. |
| Unfair discrimination against individuals, communities or groups | Negligible instances of discrimination, with virtually no discernible effect on individuals, communities, or groups. Issues are proactively identified and rapidly addressed before causing harm. | Limited instances of unfair discrimination occur, affecting a small number of individuals. Relatively isolated cases, and corrective measures minimise their impact. | Moderate levels of discrimination leading to noticeable harm to certain individuals, communities, or groups. These incidents raise bias and fairness concerns and require targeted interventions. | Significant discrimination results in major, tangible harm to individuals and multiple communities or groups. Rebuilding trust requires substantial reforms and remediation efforts. | Pervasive and systemic discrimination causes severe harm across a broad spectrum of the population, particularly marginalised and vulnerable groups. Public outrage, potential legal action, and a profound loss of trust in government. Immediate, sweeping reforms and accountability measures are required. |
| Perpetuating stereotyping or demeaning representations of individuals, communities or groups | Inadvertently reinforce mild stereotypes, but these instances are quickly identified and rectified with no lasting harm or public concern. | Isolated cases of stereotyping, affecting a limited number of community members, with some noticing and raising concerns. Prompt action mitigates the issue, preventing broader impact. | Moderate stereotyping by AI systems leads to noticeable public discomfort and criticism, disproportionately affecting certain communities or groups. Requires targeted corrective measures to address and prevent recurrence. | Significant and widespread reinforcement of harmful stereotypes and demeaning representations. Causes public outcry and damages the relationship between communities and government entities. Urgent, comprehensive strategies are needed to rectify these representations and restore trust. | Pervasive and damaging stereotyping severely harms multiple communities, leading to widespread distress, potential legal consequences and a profound breach of trust in government use of technology. Requires immediate, sweeping actions to address the harm, including system overhauls and public apologies. |
| Harm to individuals, communities, groups, businesses or the environment | Inconsequential glitches with no real harm to the public, business operations or ecosystems. Easily managed through routine measures. | Isolated incidents mildly affecting the public. Slight inconveniences or disruptions to businesses, leading to manageable financial costs. Limited manageable environmental disturbances affecting local ecosystems or resource consumption. | Noticeable negative effects on the public. Businesses face operational challenges or financial losses, affecting their competitiveness. Obvious environmental degradation, including pollution or habitat disruption, prompting public concern. | Significant public harm causing distress and potentially lasting damage. Significant harm to a wide range of businesses, resulting in substantial financial losses, layoffs, and long-term reputational damage. Compromises ecosystem wellbeing causing substantial pollution, loss of biodiversity, and resource depletion. | Widespread, profound harm and severe distress affecting broad segments of the public. Profound damage across the business sector, leading to bankruptcies, major job losses, and a lasting negative impact on the economy. Comprehensive environmental destruction, leading to critical loss of biodiversity, irreversible ecosystem damage, and severe resource scarcity. |
| Compromising privacy due to the sensitivity, amount or source of the data being used by an AI system | Insignificant data handling errors occur without compromising sensitive information. Incidents are quickly rectified, maintaining public trust in data security. | Isolated exposure of limited sensitive data affects a small group of individuals. Swift actions taken to secure the data and prevent further incidents. | Breach of moderate amounts of sensitive data, leading to privacy concerns among the affected populace. Some individuals experience inconvenience and distress. | Serious misuse of sensitive private data affects a large segment of the population, leading to widespread privacy violations and a loss of public trust. Comprehensive measures are urgently required to secure data and address the privacy breaches. | Significant potential to expose sensitive information of a vast number of individuals, causing severe harm, identity-theft risks; use of sensitive personal information in a way that is likely to draw public criticism with limited ability for individuals to choose how their information is used. Significant potential to harm trust in government information handling with potential for lasting consequences. |
| Raising security concerns due to the sensitivity or classification of the data being used by an AI system | Inconsequential security lapses occur without actual misuse of sensitive data. Quickly identified and corrected with no real harm done. These types of incidents may serve as prompts for reviewing security protocols. | A limited security breach involves unauthorised access to protected data affecting a small number of records with minimal impact. Immediate actions secure the breach, and affected individuals are notified and supported. Incident is a catalyst for review of security protocols. | Security incident leads to the compromise of a moderate volume of sensitive data, raising concerns over data protection and privacy. The breach necessitates a thorough investigation and enhanced security measures. | A significant security breach results in extensive unauthorised access to sensitive or protected data, causing considerable concern and distress among the public. Urgent security upgrades and support measures for impacted individuals are implemented to restore security and trust. | A massive security breach exposes a vast amount of sensitive and protected data, leading to severe implications for national security, public safety, and individual privacy. This incident triggers an emergency response, including legal actions, a major overhaul of security systems, and long-term support for those affected. |
| Raising security concerns due to implementation, sourcing or characteristics of the AI system | Inconsequential security concerns arise due to characteristics of the AI system, such as software bugs, which are promptly identified and fixed with no adverse effects on overall security. These issues may serve as lessons, leading to slight improvements in the system's security framework. | Certain characteristics of the AI system lead to vulnerabilities that are exploited in a limited manner, causing minor security breaches. Immediate remediation measures are taken, and the system is updated to prevent similar issues. | A moderate security risk is realised when intrinsic features of the AI system allow for unintended access or data leaks. Incident affects a noticeable but contained component of the AI system. Prompts a comprehensive security review of the AI system and the implementation of more robust safeguards. | Significant security flaws in the AI system's design result in major breaches, compromising a large amount of data and severely affecting system integrity. Incident leads to an urgent overhaul of security measures and protocols, alongside efforts to mitigate the damage. | Critical security vulnerabilities inherent to the AI system lead to widespread breaches, exposing vast quantities of sensitive data and jeopardising national security or public safety. The incident results in severe consequences, necessitating emergency responses, extensive system redesigns, and long-term efforts to recover from the breach and prevent recurrence. |
| Posing a reputational risk or undermining public confidence in the government | Isolated reputational issues arise, quickly addressed and explained. Causes negligible damage to public trust in government capabilities. | Small-scale AI mishaps lead to brief public concern, slightly denting the government's reputation. Prompt clarification and corrective measures minimise long-term impact on public confidence. Seen by the government as poor management. | Misapplications result in moderate public dissatisfaction and questioning of government oversight. Requires remedial actions to mend trust and address concerns. Seen by government and opposition as failed management. | Widespread public scepticism and criticism, significantly affecting the government's image. Requires substantial efforts to rebuild public confidence through transparency, accountability, and improvement of AI governance. High-profile negative stories, seen by government and opposition as significant failed management. | Severe misuse or failure of AI systems leads to profound public distrust and criticism, significantly undermining confidence in government effectiveness and integrity. Requires comprehensive, long-term strategies for rehabilitation of public trust, including systemic changes and ongoing engagement. Seen by government and opposition as catastrophic failure of management. Minister expresses loss of confidence or trust in agency. |
Version 2.0
This version of the policy (v2.0) is effective 15 December 2025. The first version (v1.1) took effect on 1 September 2024.
It applies to all non-corporate Commonwealth entities, with some exceptions.
Departments and agencies must meet the mandatory requirements for:
This policy aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations.
This policy aims to provide a unified approach to enable government to accelerate AI adoption and embrace the AI opportunity. It is designed to reduce barriers to government adoption by helping agencies confidently approach AI governance and implementation.
It aims to ensure agencies have the right settings in place to take advantage of the opportunities presented by AI and fully realise benefits such as increased and improved efficiency, accuracy and service delivery.
This policy aims to strengthen public trust in government adoption of AI by positioning the Australian Government as an exemplar in safe and responsible AI use.
It is designed to enable the responsible use of AI across government, through setting consistent requirements for transparency and accountability, and by requiring risk-based oversight of AI use cases.
This policy aims to embed a forward-leaning, adaptive approach for government’s use of AI that evolves as the technological and policy environment changes.
It supports agencies at different stages of their AI adoption journey and sets requirements that scale with the agency’s use of AI.
This version of the policy (v2.0) is effective 15 December 2025. It replaces version v1.1 of the policy which came into effect 1 September 2024.
All non-corporate Commonwealth entities (NCEs), as defined by the Public Governance, Performance and Accountability Act 2013, must apply this policy.
Corporate Commonwealth entities are also encouraged to apply this policy.
This policy does not apply to:
The NIC includes:
Defence and members of the NIC may voluntarily adopt elements of this policy where they are able to do so without compromising national security capabilities or interests.
The challenges raised by government use of AI are complex and inherently linked with other considerations, such as the APS Code of Conduct, data governance, cyber security, privacy and ethics practices.
This policy has been designed to complement and strengthen – not duplicate – existing frameworks, legislation and practices that touch on government’s use of AI.
This policy must be read and applied alongside existing frameworks and laws to ensure agencies meet all their obligations.
Agencies must make a publicly available statement outlining their approach to AI adoption and use, as prescribed under the Standard for transparency statements.
The statement must be reviewed and updated annually or sooner, should the agency make significant changes to its approach to AI.
Agencies must notify the DTA when they publish and make any changes to their AI transparency statement by emailing ai@dta.gov.au.
Agencies must develop a strategic position on AI adoption within 6 months of this policy taking effect. This position is to emphasise how AI opportunities can be identified and embraced by the agency.
Agencies must communicate their strategic position on AI to give staff clear direction on AI adoption. In line with their current and anticipated use of AI, agencies can develop a standalone AI strategy, augment an existing strategy or create other materials to communicate the approach to staff.
Agencies must designate accountable official(s) to take accountability for implementing this policy.
Agencies must follow the Standard for accountability when designating accountable official(s) and implementing this requirement. The responsibilities of accountable officials are set out in the standard.
Agencies must notify the DTA when they designate and make any changes to their accountable official(s) by emailing ai@dta.gov.au.
Agencies must designate an accountable use case owner for each in-scope AI use case within 12 months of this policy taking effect. Accountable official(s) are to maintain a register of accountable use case owners.
Agencies must follow the Standard for accountability when implementing this requirement. The responsibilities of accountable use case owners are set out in the standard.
Within 12 months of this policy taking effect, agencies must create a register of in-scope AI use cases that enables accountable official(s) to record accountable use case owners.
Agencies must share the register with the DTA every 6 months, commencing from when they create the register to meet the above requirement.
The Standard for accountability lists the minimum fields agencies must capture in the use case register. Agencies can add additional fields to meet their organisational needs. An existing register may be reused for the purposes of meeting this requirement. The standard also provides the instructions for how to share agency registers with the DTA.
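As a purely illustrative example (the authoritative minimum fields are those listed in the Standard for accountability; the field names below are assumptions), an internal register might capture entries like the following:

```python
# Illustrative only: a possible row shape for an internal AI use case
# register. The authoritative minimum fields are defined in the Standard
# for accountability; these names and values are hypothetical.
import csv

fieldnames = [
    "use_case_name",
    "description",
    "inherent_risk_rating",        # e.g. low / medium / high
    "accountable_use_case_owner",
    "status",                      # e.g. design / deployed / retired
    "last_reviewed",
]

rows = [{
    "use_case_name": "Correspondence triage assistant",
    "description": "Classifies incoming correspondence by topic",
    "inherent_risk_rating": "medium",
    "accountable_use_case_owner": "Director, Client Services",
    "status": "deployed",
    "last_reviewed": "2026-01-15",
}]

with open("ai_use_case_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```

A simple structured format like this also makes the 6-monthly sharing of the register with the DTA straightforward to automate.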
The principles and requirements included in this section standardise key elements of AI governance that allow agencies to build AI capability and use AI responsibly.
Agencies must establish an approach to embed responsible AI practices within 12 months of this policy taking effect. This may vary according to the scale and scope of agency AI use.
At a minimum, the approach will provide an agency with:
Agencies may modify existing policies, procedures and frameworks, or create new ones. Smaller agencies with minimal AI adoption could amend existing documentation and/or assign key personnel to guide staff on responsible AI adoption on an ad hoc basis. Agencies with greater AI adoption could create dedicated AI policies, procedures and/or frameworks to support responsible adoption. Accountable officials are responsible for deciding the appropriate approach for their agency.
Agencies must implement mandatory training for all staff on responsible AI use within 12 months of this policy taking effect. Agencies should consider the Guidance for staff training on AI and can use the AI fundamentals training module to meet the requirement. They can use the module as provided, modify it, or incorporate it into an existing training program based on their specific context and requirements. Alternatively, agencies can allow their staff to access the module directly through APSLearn.
Agencies should implement additional training for staff as required, in consideration of their roles and responsibilities. For example, additional training for those responsible for the procurement, development, training and deployment of AI systems.
It is strongly recommended that agencies apply the AI technical standard for Australian Government. The standard is designed for Australian Government agencies adopting AI. It embeds the principles of fairness, transparency, and accountability into a set of technical requirements and guidelines.
It is strongly recommended that agencies refer to the Guidance on AI procurement in government when procuring AI products and services. The guidance offers practical, step-by-step advice to help agencies identify and manage AI-specific risks while maintaining procurement best practices.
Agencies are also encouraged to support responsible adoption by:
Applying the Managing access to public generative AI tools guidance and the Using public generative AI tools safely and responsibly guidance.
Developing staff AI capability to effectively use AI and comply with AI policy and regulation.
The principles and requirements in this section are intended to ensure that the potential impacts of AI use cases are assessed and that higher-risk AI receives additional oversight.
Agencies must assess all new AI use cases against the in-scope criteria (Appendix C) to determine if they are in scope of the policy. The assessment must be documented and take place during the design phase while developing requirements.
Agencies must begin AI use case assessments within 12 months of this policy taking effect.
For existing use cases not yet assessed, agencies must determine whether they are in scope of this policy and apply all relevant policy actions by 30 April 2027.
Where practicable, agencies should implement the requirements ahead of the deadlines listed above.
For AI use cases that are in-scope, agencies must conduct an AI use case impact assessment. Agencies are to commence an assessment at the design stage. Before the solution is deployed, agencies must finalise the assessment and apply any agreed risk treatments.
Agencies may conduct an AI use case impact assessment by using either:
Where an agency integrates the tool, they must ensure:
Agencies must be able to revise their internal process in response to any impact assessment tool updates.
Agencies must add each in-scope AI use case to their internal register of AI use cases and update it as required, including changes to risk ratings and accountable use case owners. When deploying an in-scope AI use case, agencies must:
Agencies should also monitor changes that are not initiated by the agency. For example, vendor changes and changes in the regulatory environment. Agencies could also ask vendors to provide information on updates through contractual mechanisms.
If an agency determines that their in-scope AI use case has an inherent medium-risk rating when completing an AI use case impact assessment, they should consider whether the use case would benefit from being governed through a designated board or a senior executive. If they apply additional governance, agencies should choose an approach appropriate to their size and scope.
If an agency determines their in-scope AI use case has an inherent high-risk rating when completing an AI use case impact assessment, they must:
Once an agency has decided to deploy the use case, they must:
For use cases assessed as out of scope of this policy, agencies may adopt the use case while ensuring they comply with relevant existing obligations, such as privacy and security.
If an agency adopts an out-of-scope AI use case and there is a material change in the scope, usage or operation of the solution, the agency must reassess whether the use case has become in scope of this policy.
If a use case is in scope, agencies must follow any applicable actions in this policy.
While there are various definitions of what constitutes AI, for the purposes of this policy agencies are to apply the definition provided by the Organisation for Economic Co-operation and Development (OECD):