Diversity of perspective promotes inclusivity, mitigates bias, supports critical thinking, reduces the risk of non-compliance with anti-discrimination laws, and should be incorporated into all stages of the AI system lifecycle.
AI systems require input from stakeholders with a variety of backgrounds, including different ethnicities, genders, ages, abilities and socio-economic statuses. This also includes people with diverse professional backgrounds, such as ethicists, social scientists and domain experts relevant to the AI application. Determining which stakeholders and user groups to consult, which data to use, and the optimal team composition will depend on your AI system.
Failing to adequately incorporate diversity into relevant AI lifecycle stages can have unintended negative consequences, as illustrated in a number of real-world examples:
Resources, including approaches, templates and methods to ensure sufficient diversity and inclusion in your AI system, are described in the NAIC's Implementing Australia's AI Ethics Principles report.
You should consult an appropriate source of advice or otherwise ensure that your AI use case and use of data align with human rights obligations. If you have not done so, explain your reasoning.
It is recommended that you complete this question after you have completed the previous sections of the assessment. This will provide more complete information to enable an assessment of the human rights implications of your AI use case.
In Australia, it is unlawful to discriminate on the basis of a number of protected attributes, including age, disability, race, sex, intersex status, gender identity and sexual orientation, in certain areas of public life, including education and employment. Australia's federal anti-discrimination laws are contained in the following legislation:
Human rights are defined in the Human Rights (Parliamentary Scrutiny) Act 2011 as the rights and freedoms contained in the 7 core international human rights treaties to which Australia is a party, namely the:
In addition to other rights referred to in this guidance, human rights you may consider as part of your assessment of the AI use case include:
Agencies should consider putting mechanisms in place during the lifecycle of the AI system to ensure that the agency itself, or the relevant decision-maker, remains responsible and accountable for a government decision which involves the use of AI. Such mechanisms should clearly define how ultimate responsibility for the decision is retained, even when AI is used to analyse data or generate recommended outcomes.
Accountability should be considered at all stages of the AI system lifecycle. Some of the relevant considerations for different stages are outlined below.
This question seeks to confirm that you have identified and documented any agency-specific legislation, regulations or binding policy instruments that are relevant to your AI use case.
When completing this section:
This section asks whether your agency has sought or obtained legal advice in relation to the AI use case. If you answer 'yes', you should summarise the nature of the legal issue without including the content of the advice. This information should not be disclosed to anyone other than those who need to know or access the information within the agency.
Note that including the actual content of legal advice in this tool may result in waiver of legal professional privilege, meaning the advice could be legally required to be disclosed to others. To avoid unintended waiver, only summarise the subject matter of the advice (for example, 'privacy compliance' or 'intellectual property risks') rather than reproducing or paraphrasing the advice itself.
To complete the risk summary table:
To complete this section, choose an overall residual risk rating for the AI use case. Refer to your response to section 12.3.
If your use case's inherent risk is rated as high at section 3, you are required under the AI policy to apply specific actions, including creating or reusing a governance body for the purpose of governing high-risk AI. You may document the outcome of the governance body review here, including any recommendations and next steps.
This table is designed to help you select the appropriate consequence level for the risk questions in sections 3.1 to 3.8. Examples are illustrative, not exhaustive.
| Risk | Insignificant | Minor | Moderate | Major | Severe |
|---|---|---|---|---|---|
| Negatively affecting public accessibility or inclusivity of government services | Insignificant compromises to accessibility or inclusivity of services. Minor technical issues causing brief inconvenience but no actual barriers to access or inclusion. Issues rapidly resolved with minimal impact on user experience. | Limited, reversible compromises to accessibility or inclusivity of services. Some people experience difficulties accessing services due to technical issues or design oversights. Barriers are short-term and addressed once identified, with additional support provided to people affected. | Many compromises are made to the accessibility or inclusivity of services. Considerable access challenges for a modest number of users. Resolving access issues requires substantial effort and resources. Certain groups may be disproportionately impacted. Affected users experience frustration and delays in receiving services. | Extensive compromises are made to the accessibility or inclusivity of services, which may include some essential services. Ongoing delays that require external technical assistance to resolve. Widespread inconvenience, frustration, public distress and potential legal implications. Vulnerable user groups disproportionately impacted. | Widespread, irreversible and ongoing compromises are made to the accessibility or inclusivity of services, including some essential services. Majority of users, especially vulnerable groups, affected. Essential services inaccessible for extended periods, causing significant public distress, legal implications and a loss of trust in government efficiency. Comprehensive and immediate actions are urgently needed to rectify the situation. |
| Unfair discrimination against individuals, communities or groups | Negligible instances of discrimination, with virtually no discernible effect on individuals, communities, or groups. Issues are proactively identified and rapidly addressed before causing harm. | Limited instances of unfair discrimination occur, affecting a small number of individuals. Relatively isolated cases, and corrective measures minimise their impact. | Moderate levels of discrimination leading to noticeable harm to certain individuals, communities, or groups. These incidents raise bias and fairness concerns and require targeted interventions. | Significant discrimination results in major, tangible harm to individuals and multiple communities or groups. Rebuilding trust requires substantial reforms and remediation efforts. | Pervasive and systemic discrimination causes severe harm across a broad spectrum of the population, particularly marginalised and vulnerable groups. Public outrage, potential legal action, and a profound loss of trust in government. Immediate, sweeping reforms and accountability measures are required. |
| Perpetuating stereotyping or demeaning representations of individuals, communities or groups | Mild stereotypes are inadvertently reinforced, but these instances are quickly identified and rectified with no lasting harm or public concern. | Isolated cases of stereotyping affect a limited number of community members, with some noticing and raising concerns. Prompt action mitigates the issue, preventing broader impact. | Moderate stereotyping by AI systems leads to noticeable public discomfort and criticism, disproportionately affecting certain communities or groups. Requires targeted corrective measures to address and prevent recurrence. | Significant and widespread reinforcement of harmful stereotypes and demeaning representations. Causes public outcry and damages the relationship between communities and government entities. Urgent, comprehensive strategies are needed to rectify these representations and restore trust. | Pervasive and damaging stereotyping severely harms multiple communities, leading to widespread distress. Potential legal consequences and a profound breach of trust in government use of technology. Requires immediate, sweeping actions to address the harm, including system overhauls and public apologies. |
| Raising privacy concerns | Insignificant data handling errors occur without compromising sensitive information. Incidents are quickly rectified, maintaining public trust in data security. | Isolated exposure of limited sensitive data affects a small group of individuals. Swift actions taken to secure the data and prevent further incidents. | Breach of moderate amounts of sensitive data, leading to privacy concerns among the affected populace. Some individuals experience inconvenience and distress. | Serious misuse of sensitive private data affects a large segment of the population, leading to widespread privacy violations and a loss of public trust. Comprehensive measures are urgently required to secure data and address the privacy breaches. | Significant potential to expose sensitive information of a vast number of individuals, causing severe harm and identity-theft risks; use of sensitive personal information in a way that is likely to draw public criticism with limited ability for individuals to choose how their information is used. Significant potential to harm trust in government information handling with potential for lasting consequences. |
| Compromising privacy due to the sensitivity, amount or source of the data being used by an AI system | Insignificant data handling errors occur without compromising sensitive information. Incidents are quickly rectified, maintaining public trust in data security. | Isolated exposure of limited sensitive data affects a small group of individuals. Swift actions taken to secure the data and prevent further incidents. | Breach of moderate amounts of sensitive data, leading to privacy concerns among the affected populace. Some individuals experience inconvenience and distress. | Serious misuse of sensitive private data affects a large segment of the population, leading to widespread privacy violations and a loss of public trust. Comprehensive measures are urgently required to secure data and address the privacy breaches. | Significant potential to expose sensitive information of a vast number of individuals, causing severe harm, identity-theft risks; use of sensitive personal information in a way that is likely to draw public criticism with limited ability for individuals to choose how their information is used. Significant potential to harm trust in government information handling with potential for lasting consequences. |
| Raising security concerns due to the sensitivity or classification of the data being used by an AI system | Inconsequential security lapses occur without actual misuse of sensitive data. Quickly identified and corrected with no real harm done. These types of incidents may serve as prompts for reviewing security protocols. | A limited security breach involves unauthorised access to protected data, affecting a small number of records with minimal impact. Immediate actions secure the breach, and affected individuals are notified and supported. Incident is a catalyst for a review of security protocols. | Security incident leads to the compromise of a moderate volume of sensitive data, raising concerns over data protection and privacy. The breach necessitates a thorough investigation and enhanced security measures. | A significant security breach results in extensive unauthorised access to sensitive or protected data, causing considerable concern and distress among the public. Urgent security upgrades and support measures for impacted individuals are implemented to restore security and trust. | A massive security breach exposes a vast amount of sensitive and protected data, leading to severe implications for national security, public safety, and individual privacy. This incident triggers an emergency response, including legal actions, a major overhaul of security systems, and long-term support for those affected. |
| Raising security concerns due to implementation, sourcing or characteristics of the AI system | Inconsequential security concerns arise due to characteristics of the AI system, such as software bugs, which are promptly identified and fixed with no adverse effects on overall security. These issues may serve as lessons, leading to slight improvements in the system's security framework. | Certain characteristics of the AI system lead to vulnerabilities that are exploited in a limited manner, causing minor security breaches. Immediate remediation measures are taken, and the system is updated to prevent similar issues. | A moderate security risk is realised when intrinsic features of the AI system allow for unintended access or data leaks. Incident affects a noticeable but contained component of the AI system. Prompts a comprehensive security review of the AI system and the implementation of more robust safeguards. | Significant security flaws in the AI system's design result in major breaches, compromising a large amount of data and severely affecting system integrity. Incident leads to an urgent overhaul of security measures and protocols, alongside efforts to mitigate the damage. | Critical security vulnerabilities inherent to the AI system lead to widespread breaches, exposing vast quantities of sensitive data and jeopardising national security or public safety. The incident results in severe consequences, necessitating emergency responses, extensive system redesigns, and long-term efforts to recover from the breach and prevent recurrence. |
| Posing a reputational risk or undermining public confidence in the government | Isolated reputational issues arise, quickly addressed and explained. Causes negligible damage to public trust in government capabilities. | Small-scale AI mishaps lead to brief public concern, slightly denting the government's reputation. Prompt clarification and corrective measures minimise long-term impact on public confidence. Seen by the government as poor management. | Misapplications result in moderate public dissatisfaction and questioning of government oversight. Requires remedial actions to mend trust and address concerns. Seen by government and opposition as failed management. | Widespread public scepticism and criticism, severely affecting the government's image. Requires substantial efforts to rebuild public confidence through transparency, accountability and improvement of AI governance. High-profile negative stories, seen by government and opposition as significant failed management. | Severe misuse or failure of AI systems leads to profound public distrust and criticism, significantly undermining confidence in government effectiveness and integrity. Requires comprehensive, long-term strategies for rehabilitation of public trust, including systemic changes and ongoing engagement. Seen by government and opposition as catastrophic failure of management. Minister expresses loss of confidence or trust in agency. |
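As an aid to understanding only: matrices like the one above are typically applied as a simple lookup that combines a consequence level with a likelihood level to produce an overall rating. The sketch below is a hypothetical illustration in Python; the likelihood scale and the rating boundaries are assumptions for demonstration and are not taken from the assessment tool or this policy.

```python
# Illustrative only: a generic consequence x likelihood risk matrix.
# The likelihood scale and rating boundaries are assumptions for
# demonstration; they are not taken from the assessment tool or policy.

CONSEQUENCES = ["insignificant", "minor", "moderate", "major", "severe"]
LIKELIHOODS = ["rare", "unlikely", "possible", "likely", "almost certain"]

def risk_rating(consequence: str, likelihood: str) -> str:
    """Map a consequence/likelihood pair to an overall risk rating."""
    score = CONSEQUENCES.index(consequence) + LIKELIHOODS.index(likelihood)
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# Example: a 'major' consequence that is 'possible' rates as high.
print(risk_rating("major", "possible"))  # -> high
```

The same lookup can be applied twice: once with untreated values to arrive at an inherent rating, and again after agreed risk treatments to arrive at the residual rating referred to in the sections above.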
This policy aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations.
This policy aims to provide a unified approach to enable government to accelerate AI adoption and embrace the AI opportunity. It is designed to reduce barriers to government adoption by helping agencies confidently approach AI governance and implementation.
It aims to ensure agencies have the right settings in place to take advantage of the opportunities presented by AI and fully realise benefits such as increased and improved efficiency, accuracy and service delivery.
This policy aims to strengthen public trust in government adoption of AI by positioning the Australian Government as an exemplar in safe and responsible AI use.
It is designed to enable the responsible use of AI across government, through setting consistent requirements for transparency and accountability, and by requiring risk-based oversight of AI use cases.
This policy aims to embed a forward-leaning, adaptive approach for government’s use of AI that evolves as the technological and policy environment changes.
It supports agencies at different stages of their AI adoption journey and sets requirements that scale with the agency’s use of AI.
This version of the policy (v2.0) is effective from 15 December 2025. It replaces version 1.1 of the policy, which came into effect on 1 September 2024.
All non-corporate Commonwealth entities (NCEs), as defined by the Public Governance, Performance and Accountability Act 2013, must apply this policy.
Corporate Commonwealth entities are also encouraged to apply this policy.
This policy does not apply to:
The NIC includes:
Defence and members of the NIC may voluntarily adopt elements of this policy where they are able to do so without compromising national security capabilities or interests.
The challenges raised by government use of AI are complex and inherently linked with other considerations, such as the APS Code of Conduct, data governance, cyber security, privacy and ethics practices.
This policy has been designed to complement and strengthen – not duplicate – existing frameworks, legislation and practices that touch on government’s use of AI.
This policy must be read and applied alongside existing frameworks and laws to ensure agencies meet all their obligations.
Agencies must make a publicly available statement outlining their approach to AI adoption and use, as prescribed under the Standard for transparency statements.
The statement must be reviewed and updated annually, or sooner should the agency make significant changes to its approach to AI.
Agencies must notify the DTA when they publish and make any changes to their AI transparency statement by emailing ai@dta.gov.au.
Agencies must develop a strategic position on AI adoption within 6 months of this policy taking effect. This position is to emphasise how AI opportunities can be identified and embraced by the agency.
Agencies must communicate their strategic position on AI to give staff clear direction on AI adoption. In line with their current and anticipated use of AI, agencies can develop a standalone AI strategy, augment an existing strategy or create other materials to communicate the approach to staff.
Agencies must designate accountable official(s) who are accountable for implementing this policy.
Agencies must follow the Standard for accountability when designating accountable official(s) and implementing this requirement. The responsibilities of accountable officials are set out in the standard.
Agencies must notify the DTA when they designate and make any changes to their accountable official(s) by emailing ai@dta.gov.au.
Agencies must designate an accountable use case owner for each in-scope AI use case within 12 months of this policy taking effect. Accountable official(s) are to maintain a register of accountable use case owners.
Agencies must follow the Standard for accountability when implementing this requirement. The responsibilities of accountable use case owners are set out in the standard.
Agencies must create a register of in-scope AI use cases to enable accountable official(s) to record accountable use case owners within 12 months of this policy taking effect.
Agencies must share the register with the DTA every 6 months, commencing from when they create the register to meet the above requirement.
The Standard for accountability lists the minimum fields agencies must capture in the use case register. Agencies can add additional fields to meet their organisational needs. An existing register may be reused for the purposes of meeting this requirement. The standard also provides the instructions for how to share agency registers with the DTA.
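As a hypothetical illustration of what such a register might look like in practice, the sketch below models one register entry as a structured record. The field names are assumptions for demonstration; the Standard for accountability, not this sketch, defines the minimum fields agencies must capture.

```python
# Hypothetical sketch of an AI use case register entry.
# Field names are illustrative assumptions; the Standard for accountability
# defines the minimum fields agencies must actually capture.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UseCaseRegisterEntry:
    use_case_id: str
    name: str
    description: str
    accountable_use_case_owner: str
    inherent_risk_rating: str   # e.g. "low", "medium", "high"
    residual_risk_rating: str
    date_assessed: date
    extra_fields: dict = field(default_factory=dict)  # agency-specific additions

# Example entry for a hypothetical use case.
register = [
    UseCaseRegisterEntry(
        use_case_id="UC-001",
        name="Correspondence triage assistant",
        description="Suggests routing categories for incoming correspondence.",
        accountable_use_case_owner="Director, Service Delivery",
        inherent_risk_rating="medium",
        residual_risk_rating="low",
        date_assessed=date(2026, 3, 1),
    )
]
```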
The principles and requirements included in this section standardise key elements of AI governance that allow agencies to build AI capability and use AI responsibly.
Agencies must establish an approach to embed responsible AI practices within 12 months of this policy taking effect. This may vary according to the scale and scope of agency AI use.
At a minimum, the approach will provide an agency with:
Agencies may modify existing policies, procedures and frameworks, or create new ones. Smaller agencies with minimal AI adoption could amend existing documentation and/or assign key personnel to guide staff on responsible AI adoption on an ad hoc basis. Agencies with greater AI adoption could create dedicated AI policies, procedures and/or frameworks to support responsible adoption. Accountable officials are responsible for deciding the appropriate approach for their agency.
Agencies must implement mandatory training for all staff on responsible AI use within 12 months of this policy taking effect. Agencies should consider the Guidance for staff training on AI and can use the AI fundamentals training module to meet the requirement. They can use the module as provided, modify it, or incorporate it into an existing training program based on their specific context and requirements. Alternatively, agencies can allow their staff to access the module directly through APSLearn.
Agencies should implement additional training for staff as required, in consideration of their roles and responsibilities. For example, additional training for those responsible for the procurement, development, training and deployment of AI systems.
It is strongly recommended that agencies apply the AI technical standard for Australian Government. The standard is designed for Australian Government agencies adopting AI. It embeds the principles of fairness, transparency, and accountability into a set of technical requirements and guidelines.
It is strongly recommended that agencies refer to the Guidance on AI procurement in government when procuring AI products and services. The guidance offers practical, step-by-step advice to help agencies identify and manage AI-specific risks while maintaining procurement best practices.
Agencies are also encouraged to support safe and responsible adoption by:
- applying the Managing access to public generative AI tools guidance and the Using public generative AI tools safely and responsibly guidance
- developing staff AI capability to effectively use AI and comply with AI policy and regulation.
The principles and requirements in this section are intended to ensure that the potential impacts of AI use cases are assessed and that higher-risk AI receives additional oversight.
Agencies must assess all new AI use cases against the in-scope criteria (Appendix C) to determine if they are in scope of the policy. The assessment must be documented and take place during the design phase while developing requirements.
Agencies must begin AI use case assessments within 12 months of this policy taking effect.
For existing use cases not yet assessed, agencies must determine whether they are in scope of this policy and apply all relevant policy actions by 30 April 2027.
Where practicable, agencies should implement the requirements ahead of the deadlines listed above.
For AI use cases that are in scope, agencies must conduct an AI use case impact assessment. Agencies are to commence the assessment at the design stage. Before the solution is deployed, agencies must finalise the assessment and apply any agreed risk treatments.
Agencies may conduct an AI use case impact assessment by using either:
Where an agency integrates the tool, they must ensure:
Agencies must be able to revise their internal process in response to any impact assessment tool updates.
Agencies must add each in-scope AI use case to their internal register of AI use cases and update it as required, including changes to the risk rating and the accountable use case owner. When deploying an in-scope AI use case, agencies must:
Agencies should also monitor changes that are not initiated by the agency. For example, vendor changes and changes in the regulatory environment. Agencies could also ask vendors to provide information on updates through contractual mechanisms.
If an agency determines their in-scope AI use case has an inherent medium-risk rating when completing an AI use case impact assessment, they should consider whether the use case would benefit from being governed by a designated board or a senior executive. If they apply additional governance, agencies should choose an approach appropriate to their size and scope.
If an agency determines their in-scope AI use case has an inherent high-risk rating when completing an AI use case impact assessment, they must:
Once an agency has decided to deploy the use case, they must:
For use cases assessed as out of scope of this policy, agencies may adopt the use case while ensuring they comply with relevant existing obligations, such as privacy and security.
If an agency adopts an out-of-scope AI use case, they must assess whether the use case becomes in-scope of this policy if there is a material change in the scope, usage or operation of the solution.
If a use case is in scope, agencies must follow any applicable actions in this policy.
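To make the re-screening trigger concrete, the sketch below expresses it as a simple check: any material change to the scope, usage or operation of a deployed solution prompts a fresh assessment against the in-scope criteria. This is a minimal sketch; the change categories are illustrative assumptions, not policy text.

```python
# Hypothetical sketch of the re-screening trigger for deployed use cases.
# The change categories are illustrative assumptions, not policy text.

MATERIAL_CHANGE_AREAS = {"scope", "usage", "operation"}

def requires_rescreening(changed_areas: set[str]) -> bool:
    """Return True if any recorded change touches a material area."""
    return bool(changed_areas & MATERIAL_CHANGE_AREAS)

# Example: a vendor model upgrade that alters how the solution operates
# would prompt a fresh assessment against the in-scope criteria.
print(requires_rescreening({"operation"}))  # -> True
print(requires_rescreening({"branding"}))   # -> False
```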
While there are various definitions of what constitutes AI, for the purposes of this policy agencies are to apply the definition provided by the Organisation for Economic Co-operation and Development (OECD):

> An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.