8. Transparency and explainability

8.1 Consultation

You should consult with a diverse range of internal and external stakeholders at every stage of your AI use case development and deployment to help identify potential biases, privacy concerns, and other ethical and legal issues present in your AI use case. This process can also help foster transparency, accountability, and trust with your stakeholders and can help improve their understanding of the technology's benefits and limitations. Refer to the stakeholders you identified in section 2.4.

If your project has the potential to significantly impact First Nations individuals, communities or groups, it is critical that you meaningfully consult with relevant community representatives.

Consultation resources

APS Framework for Engagement and Participation: sets principles and standards that underpin effective APS engagement with citizens, community and business and includes practical guidance on engagement methods.

Best practice consultation guidance note: this resource from the Office of Impact Analysis details the Australian Government consultation principles outlined in the Guide to Policy Impact Analysis.

Principles for engagement in projects concerning Aboriginal and Torres Strait Islander peoples: this resource from the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS) provides non-Indigenous policy makers and service designers with the foundational principles for meaningfully engaging with Aboriginal and Torres Strait Islander peoples on projects that impact their communities.

8.2 Public visibility

Where appropriate, you should consider options to make the scope and goals of your AI use case publicly available. For instance, consider including this information on the relevant program page on your agency website or through other official communications. This information could include:

  • use case purpose
  • overview of model and application, including how the AI will use data to provide relevant outputs
  • benefits
  • risks and mitigations
  • training data sources
  • contact information for public enquiries.

All agencies in scope of the AI policy are required to publish an AI transparency statement. Your agency's AI accountable official is responsible for ensuring your agency's transparency statement complies with the AI policy. More information on this requirement is contained in the AI policy and associated Standard for transparency statements. Consult your agency's AI accountable official for specific advice on your use case.

Furthermore, to comply with APP 1 and APP 5, agencies should consider updating their privacy policies with information about their use of AI, for example to advise that personal information may be disclosed to AI system developers or owners.

Considerations for publishing

In some circumstances it may not be appropriate to publish detailed information about your AI use case. When deciding whether to publish this information you should balance the public benefits of AI transparency with the potential risks as well as compatibility with any legal requirements around publication.

For example, you may choose to limit the information you publish, or not publish any information at all, if the use case is still in the experimentation phase, or if publishing may:

  • have negative implications for national security
  • have negative implications for law enforcement or criminal intelligence activities
  • significantly increase the risk of fraud or non-compliance
  • significantly increase the risk of cybersecurity threats
  • jeopardise commercial competitiveness – for example, revealing trade secrets or commercially valuable information
  • breach confidentiality obligations held by the agency under a contract
  • breach statutory secrecy provisions.

8.3 Maintain appropriate documentation and records

Agencies should comply with legislation, policies and standards for maintaining reliable and auditable records of decisions, testing, and the information and data assets used in an AI system. This enables internal and external scrutiny, continuity of knowledge and accountability, for example when responding to information requests under the Freedom of Information Act 1982 (Cth). It also supports transparency across the AI supply chain: this documentation may be useful to downstream users of AI models or systems developed by your agency.

Agencies should document AI technologies they are using to perform government functions as well as essential information about AI models, their versions, creators and owners. In addition, artefacts used and produced by AI – such as prompts, inputs and raw outputs – may constitute Commonwealth records under the Archives Act 1983 and may need to be kept for certain periods of time identified in records authorities issued by the National Archives of Australia (NAA). Such Commonwealth records must not be destroyed, disposed of, transferred, damaged or altered except in limited circumstances listed in the Archives Act.

To identify their legal obligations, business areas implementing AI in agencies may want to consult with their information and records management teams. The NAA can also provide advice on how to manage data and records produced by different AI use cases.

Refer to the NAA's published advice for further detail.

AI documentation types

Where suitable, you should consider creating the following forms of documentation for any AI system you build. If you are procuring an AI system from an external provider, it may be appropriate to request these documents as part of your tender process.

System factsheet/model card

A system factsheet (sometimes called a model card) is a short document designed to provide an overview of an AI system to non-technical audiences (such as users, members of the public, procurers, and auditors). These factsheets usually include information about the AI system's purpose, intended use, limitations, training data, and performance against key metrics.
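
A system factsheet of this kind can be captured as structured data so it can be rendered consistently for different audiences. The sketch below is illustrative only: the field names, system details and metrics are assumptions, not a prescribed government template.

```python
# A minimal system factsheet (model card) sketch. All names, values and
# metrics below are hypothetical placeholders, not a mandated schema.

factsheet = {
    "system_name": "Correspondence Triage Assistant",       # hypothetical
    "purpose": "Suggests a routing category for incoming public enquiries",
    "intended_use": "Decision support only; a human officer confirms routing",
    "out_of_scope_uses": ["Automated final decisions", "Eligibility assessment"],
    "training_data": "De-identified historical enquiries, 2019-2023",
    "limitations": [
        "Lower accuracy on enquiries shorter than 20 words",
        "Not evaluated on languages other than English",
    ],
    "performance": {"accuracy": 0.91, "macro_f1": 0.87},    # example metrics
    "contact": "ai-enquiries@agency.gov.au",                # placeholder
}

def render_factsheet(card: dict) -> str:
    """Render the factsheet as plain text for a non-technical audience."""
    lines = []
    for field, value in card.items():
        label = field.replace("_", " ").title()
        if isinstance(value, list):
            lines.append(f"{label}:")
            lines.extend(f"  - {item}" for item in value)
        elif isinstance(value, dict):
            lines.append(f"{label}: " + ", ".join(f"{k}={v}" for k, v in value.items()))
        else:
            lines.append(f"{label}: {value}")
    return "\n".join(lines)

print(render_factsheet(factsheet))
```

Keeping the factsheet as data rather than free text makes it easier to version-control alongside the system and to check that required fields are present.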

Datasheets

Datasheets are documents completed by dataset creators to provide an overview of the data used to train and evaluate an AI system. Datasheets provide key information about the dataset including its contents, data owners, composition, intended uses, sensitivities, provenance, labelling and representativeness.

System decision registries

System decision registries record key decisions made during the development and deployment of an AI system. These registries contain information about what decisions were made, when they were made, who made them and why they were made (the decision rationale).
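
The what/when/who/why structure of a registry entry can be made explicit in code. This is a minimal sketch under assumed field names; the example decision and role are hypothetical, not drawn from any real agency.

```python
# A minimal decision-registry entry sketch; field names are illustrative
# assumptions, not a mandated schema.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DecisionRecord:
    decision: str      # what was decided
    decided_on: date   # when it was decided
    decided_by: str    # who made the decision
    rationale: str     # why it was made (the decision rationale)

registry: list[DecisionRecord] = []

def log_decision(decision, decided_by, rationale, when=None):
    """Append an immutable record of a key design/deployment decision."""
    record = DecisionRecord(decision, when or date.today(), decided_by, rationale)
    registry.append(record)
    return record

log_decision(
    "Exclude free-text complaint fields from training data",
    decided_by="Data Governance Board",  # hypothetical role
    rationale="Fields contain personal information; APP compliance risk",
)
print(len(registry), registry[0].decision)
```

Making records immutable (`frozen=True`) mirrors the record-keeping principle that registry entries should not be altered after the fact.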

Reliability and safety documentation

It is also best practice to maintain documentation on testing, piloting and monitoring and evaluation of your AI system and use case, in line with the practices outlined in section 6.

For more on AI documentation, see Implementing Australia's AI Ethics Principles.

8.4 Disclosing AI interactions and outputs

You should design your use case to inform people that they are interacting with an AI system or are being exposed to content that has been generated by AI. This includes disclosing AI interactions and outputs to internal agency staff and decision-makers, as well as external parties such as members of the public engaging with government.

When to disclose use of AI

You should ensure that you disclose when a user is directly interacting with an AI system, especially:

  • when AI plays a significant role in critical decision-making processes
  • when AI has potential to influence opinions, beliefs or perceptions
  • where there is a legal requirement regarding AI disclosure (for example, updated privacy policies under APP 1 and APP 5)
  • where AI is used to generate recommendations for content, products or services.

You should ensure that you disclose when someone is being exposed to AI-generated content including where:

  • any of the content has not been through a contextually appropriate degree of fact checking and editorial review by a human with the appropriate skills, knowledge or experience in the relevant subject matter
  • the content purports to portray real people, places or events or could be misinterpreted that way
  • the intended audience for the content would reasonably expect disclosure
  • there is a legal requirement regarding AI disclosure (for example, updated privacy policies under APP 1 and APP 5).

Exercise judgment and consider the level of disclosure that the intended audience would expect, including where AI-generated content has been through rigorous fact-checking and editorial review. Err on the side of greater disclosure – norms around appropriate disclosure will continue to develop as AI-generated content becomes more ubiquitous.

Mechanisms for disclosure of AI interactions

When designing or procuring an AI system, you should consider the most appropriate mechanism(s) for disclosing AI interactions. Some examples are outlined below:

Verbal or written disclosures

Verbal or written disclosures are statements that are heard by or shown to users to inform them that they are interacting with (or will be interacting with) an AI system.

For example, disclaimers/warnings, specific clauses in privacy policy and/or terms of use, content labels, visible watermarks, by-lines, physical signage, communication campaigns.

Behavioural disclosures 

Behavioural disclosure refers to the use of stylistic indicators that help users to identify that they are engaging with AI-generated content. These indicators should generally be used in combination with other forms of disclosure.

For example, using clearly synthetic voices, formal or structured language, or robotic avatars.

Technical disclosures

Technical disclosures are machine-readable identifiers for AI-generated content.

For example, inclusion in metadata, technical watermarks, cryptographic signatures.

Agencies should consider using AI systems that use industry-standard provenance technologies, such as those aligned with the standard developed by the Coalition for Content Provenance and Authenticity (C2PA).
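
The idea behind these technical disclosures can be illustrated with a simplified sketch: provenance metadata is attached to content and integrity-protected so tampering is detectable. Note this is a stdlib-only illustration of the concept; real provenance standards such as C2PA use certificate-based signatures, not a shared HMAC key, and the key and generator names here are placeholders.

```python
# Simplified sketch of a machine-readable AI disclosure: a signed
# manifest declaring content as AI-generated. Illustrative only; this
# is NOT the C2PA format, and the key below is a placeholder.
import hashlib
import hmac
import json

SIGNING_KEY = b"agency-demo-key"  # placeholder; use managed keys in practice

def attach_disclosure(content: bytes, generator: str) -> dict:
    """Build a signed manifest declaring the content as AI-generated."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_disclosure(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the manifest matches the content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

doc = b"Summary drafted with the assistance of a language model."
m = attach_disclosure(doc, "example-llm-v1")
print(verify_disclosure(doc, m))             # True
print(verify_disclosure(b"edited text", m))  # False: content no longer matches
```

The key property shown is that the disclosure travels with the content and any later edit to the content invalidates the claim.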

Ability to request a non-AI alternative

In certain contexts it may not be necessary to provide a non-AI alternative, particularly where the AI system is low-risk, improves service delivery without affecting rights or entitlements, and where alternative pathways would create unnecessary cost, complexity or delay. In other situations, particularly where rights or entitlements may be affected, offering the ability to request a non-AI alternative can be important.

8.5 Offer appropriate explanations

Explainability refers to accurately and effectively conveying an AI system's decision process to a stakeholder, even if they do not fully understand the specifics of how the model works. Explainability facilitates procedural fairness, transparency, independent expert scrutiny and access to justice by ensuring that agencies have the material that is required to provide affected individuals with evidence that forms the basis of a decision when needed. To interpret the AI's output and offer an explanation to relevant stakeholders, you should consider whether the agency can access:

  • the inputs from the agency
  • the logic behind an individual output
  • the model that the AI system uses and the sources of data for the model
  • information on which features of the AI contributed to the output
  • automatic records of events which allow for traceability of the AI's functioning
  • any risk management measures in place which would allow the agency to understand risks and adjust use of the AI accordingly (for example, technical limitations such as error rates of an AI model).

You should be able to clearly explain how a government decision or outcome has been made or informed by AI to a range of technical and non-technical audiences. You should also be aware of any requirements in legislation to provide reasons for decisions, both generally and in relation to the particular class of decisions that you are seeking to make using AI.

Explanations may apply globally (how a model broadly works) or locally (why the model has come to a specific decision). You should determine which is more appropriate for your audience.

Principles for providing effective explanations

Contrastive

Outline why the AI system produced one outcome instead of another.

Selective

Focus on the most-relevant factors contributing to the AI system's decision process.

Consistent with the audience's understanding

Align with the audience's level of technical (or non-technical) background.

Generalisation to similar cases

Generalise to similar cases to help the audience predict what the AI system will do.

Tools for explaining non-interpretable models

Providing explanations is relatively straightforward for interpretable models with low complexity and clear parameters. However, in practice, most AI systems have low interpretability and require effective post-hoc explanations that balance accuracy and simplicity. Among other matters, you should also consider defining appropriate timeframes for providing explanations in the context of your use case.

When developing explanations, consider the range of available approaches based on your model type and use case.

  • For traditional machine learning models, feature importance methods and visualisation techniques can help explain individual predictions or overall model behaviour.
  • For neural networks and deep learning systems, specialised interpretation methods have been developed that analyse network activations, attention patterns, and gradients.
  • Large language models and foundation models require distinct approaches, including prompt-based explanations and emergent interpretability techniques.
  • Model-agnostic methods offer flexibility across different architectures, while example-based approaches use counterfactuals and contrastive examples to make predictions more understandable.
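
One of the model-agnostic methods above, permutation feature importance, can be sketched in a few lines: each feature's importance is measured by how much prediction error increases when that feature's values are shuffled. The "model" below is a hypothetical scoring function standing in for any black-box predictor; feature names and coefficients are invented for illustration.

```python
# A minimal, stdlib-only sketch of permutation feature importance,
# a model-agnostic explanation method. The predictor and features
# are hypothetical placeholders.
import random

def model_predict(row):
    # Hypothetical black box: income matters a lot, age a little,
    # postcode not at all.
    income, age, postcode = row
    return 0.8 * income + 0.2 * age + 0.0 * postcode

def mean_abs_error(rows, targets, predict):
    return sum(abs(predict(r) - t) for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, predict, n_repeats=20, seed=0):
    """Importance of feature j = average error increase when column j is
    shuffled across rows, breaking its relationship with the target."""
    rng = random.Random(seed)
    baseline = mean_abs_error(rows, targets, predict)
    importances = []
    for j in range(len(rows[0])):
        increases = []
        for _ in range(n_repeats):
            column = [r[j] for r in rows]
            rng.shuffle(column)
            shuffled = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, column)]
            increases.append(mean_abs_error(shuffled, targets, predict) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

rng = random.Random(1)
rows = [(rng.random(), rng.random(), rng.random()) for _ in range(200)]
targets = [model_predict(r) for r in rows]

for name, score in zip(["income", "age", "postcode"],
                       permutation_importance(rows, targets, model_predict)):
    print(f"{name}: {score:.3f}")
```

On this synthetic data the income score dominates the age score and the postcode score is zero, matching the coefficients the black box actually uses; production tooling (for example, library implementations of this method) adds confidence intervals and proper train/test separation.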

Advice on appropriate explanations is available in the National AI Centre's Implementing Australia's AI Ethics Principles report.

Other reputable resources for explainability tools include open-source libraries maintained by academic institutions and research communities and documentation from major cloud platform providers. When selecting tools, prioritise those with active maintenance, clear documentation, and validation through published research.

However, explainable AI algorithms are not the only way to improve system explainability. Human-centred design can also play an important part, including:

  • developing effective explanation interfaces tailored to different stakeholder audiences
  • determining appropriate levels of detail for various contexts
  • ensuring explanations are actionable and meaningful for decision-makers.

9. Contestability

9.1 Notification of AI affecting rights

You should notify individuals, groups, communities or businesses when an administrative action materially influenced by an AI system has a legal or significant effect on them. This promotes transparency and access to justice by ensuring individuals can understand how government uses AI to perform actions that affect them and have the opportunity to seek review of those actions.

This notification should state that the action was materially influenced by an AI system and include information on available review rights and how the individual can challenge the action. The notification should be clear, up-to-date, concise and understandable, and should not be complex, lengthy, legalistic or vague. It may be appropriate to provide notification prior to the action being taken or at the same time that the action occurs (for example, an applicant may be asked to acknowledge that AI will be used to a stated extent to assess their application).

An action produces a 'legal effect' when it affects the legal status or rights of an individual, group, community or business, including an effect on the:

  • provision of rights or benefits granted by legislation or common law
  • imposition of penalties or orders (civil or criminal), and
  • contractual rights.

An action produces a 'significant effect' when it affects the circumstances, behaviours, interests or choices of an individual, group, community or business, including an effect on the provision of:

  • critical government services or support, such as housing, insurance, education enrolment, criminal justice, employment opportunities and health, disability or aged care services
  • basic necessities, such as food and water.

An action may be considered to have been 'materially influenced' by an AI system if:

  • the action was automated by an AI system, with little to no human oversight
  • a component of the action was automated by an AI system, with little to no human oversight – for example, a computer performs the first 2 limbs of an action, with the final limb made by a human
  • the AI system is likely to influence actions that are performed – for example, the AI system output recommended a decision to a human for consideration or provided substantive analysis to inform a decision.

'Administrative action' is any of the following:

  • making, or refusing or failing to make, a decision
  • exercising, or refusing or failing to exercise, a power
  • performing, or refusing or failing to perform, a function or duty.

Advisory note

This guidance is designed to supplement, not replace, existing administrative law requirements pertaining to notification of administrative decisions. The Attorney-General's Department is leading work to develop a consistent legislative framework for automated decision-making (ADM), as part of the government's response to recommendation 17.1 of the Robodebt Royal Commission Report.

9.2 Challenging administrative actions influenced by AI

Individuals, groups, communities or businesses should be provided with a timely opportunity to challenge an administrative action that has a legal or significant effect on them when the action was materially influenced by an AI system. This is an important administrative law principle. It also promotes accountability and improves the quality and consistency of government decisions.

Administrative actions may be subject to both merits review and judicial review.

10. Human-centred values

10.1 Incorporating diversity

Diversity of perspective promotes inclusivity, mitigates bias, supports critical thinking and reduces the risk of non-compliance with anti-discrimination laws. It should be incorporated at all stages of the AI system lifecycle.

AI systems require input from stakeholders from a variety of backgrounds, including different ethnicities, genders, ages, abilities and socio-economic statuses. This also includes people with diverse professional backgrounds, such as ethicists, social scientists and domain experts relevant to the AI application. Determining which stakeholders and user groups to consult, which data to use, and the optimal team composition will depend on your AI system.

Failing to adequately incorporate diversity into relevant AI lifecycle stages can have unintended negative consequences, as illustrated in a number of real-world examples:

  • AI systems that were ineffective at predicting recidivism outcomes for defendants of colour and that underestimated the health needs of patients from marginalised racial and ethnic backgrounds.
  • AI job recruitment systems that unfairly affected employment outcomes.
  • Algorithms used to prioritise patients for high-risk care management programs that were less likely to refer black patients than white patients with the same level of health need.
  • An AI system designed to detect cancers that showed bias towards lighter skin tones because its training data lacked a diverse range of skin tone images, potentially delaying life-saving treatments.

Resources, including approaches, templates and methods to ensure sufficient diversity and inclusion of your AI system, are described in the NAIC's Implementing Australia's AI Ethics Principles report.

10.2 Human rights obligations

You should consult an appropriate source of advice or otherwise ensure that your AI use case and use of data align with human rights obligations. If you have not done so, explain your reasoning.

It is recommended that you complete this question after you have completed the previous sections of the assessment. This will provide more complete information to enable an assessment of the human rights implications of your AI use case.

In Australia, it is unlawful to discriminate on the basis of a number of protected attributes, including age, disability, race, sex, intersex status, gender identity and sexual orientation, in certain areas of public life including education and employment. Australia's federal anti-discrimination laws are contained in the following legislation:

  • Age Discrimination Act 2004
  • Disability Discrimination Act 1992
  • Racial Discrimination Act 1975
  • Sex Discrimination Act 1984.

Human rights are defined in the Human Rights (Parliamentary Scrutiny) Act 2011 as the rights and freedoms contained in the 7 core international human rights treaties to which Australia is a party, namely the:

  • International Covenant on Civil and Political Rights (ICCPR)
  • International Covenant on Economic, Social and Cultural Rights (ICESCR)
  • International Convention on the Elimination of All Forms of Racial Discrimination (CERD)
  • Convention on the Elimination of All Forms of Discrimination against Women (CEDAW)
  • Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT)
  • Convention on the Rights of the Child (CRC)
  • Convention on the Rights of Persons with Disabilities (CRPD).

In addition to other rights referred to in this guidance, human rights you may consider as part of your assessment of the AI use case include:

  • a right to privacy – for example, where AI is being used for tracking and surveillance
  • freedom of expression and information – for example, where AI is used to moderate a forum and therefore possibly suppress legitimate forms of expression
  • human agency – for example, where AI makes an automated decision on an individual's behalf.

11. Accountability

11.1 Ensuring accountability during the life cycle of the AI system

Agencies should consider putting mechanisms in place during the life cycle of the AI system to ensure that the agency itself, or the relevant decision-maker, remains responsible and accountable for a government decision which involves the use of AI. Such mechanisms should clearly define how ultimate responsibility for the decision is retained, even when AI is used to analyse data or generate recommended outcomes.

Accountability should be considered at all stages of the AI system lifecycle. Some of the relevant considerations for different stages are outlined below.

During the design and development phase

  • How the AI will be constructed in a way that is consistent with the scope of a decision-maker’s discretion and any legislative framework which confers authority on a decision-maker
  • How the AI will be designed in a way that ensures that the decision-maker takes into account any matters which it is required to consider as part of decision-making
  • Whether the decision-maker will have the ability to override or disregard decisions made by AI (for example, where its outputs are based on biased data).

During the deployment and operation phase

  • What information the decision-maker needs to have oversight of the AI (for example, information on the capacities and limitations of the AI, and the process that the AI will use to reach a conclusion)
  • What processes are in place to ensure that, where appropriate, final discretion or judgement lies with the decision-maker (for example, the decision-maker analyses the information provided by the AI before making a decision)
  • What records will be kept of the decision-maker’s reasoning at any decision points which require discretion and judgment. This contributes to the contestability of decisions in accordance with section 9.

Where the scope of the use case changes or is developed during the life of the AI system

  • What assessment, review and acceptance-testing processes are to be applied to the changes to the AI system to ensure the above.

12. Use case review and next steps

12.1 Alignment with relevant legal frameworks

This question looks to confirm that you have identified and documented any agency-specific legislation, regulations or binding policy instruments that are relevant to your AI use case.

When completing this section:

  • review your agency's legislative and regulatory frameworks. Identify any provisions that may be affected by, or place restrictions on, the design, operation, or outputs of the AI system
  • if there is any uncertainty, engage your agency's legal area early, and maintain legal professional privilege where appropriate.

12.2 Legal advice

This section asks whether your agency has sought or obtained legal advice in relation to the AI use case. If you answer 'yes', you should summarise the nature of the legal issue without including the content of the advice. Within the agency, this information should only be disclosed to those who need to know it.

Note that including the actual content of legal advice in this tool may result in waiver of legal professional privilege, meaning the advice could be legally required to be disclosed to others. To avoid unintended waiver, only summarise the subject matter of the advice (for example, 'privacy compliance' or 'intellectual property risks') rather than reproducing or paraphrasing the advice itself.

12.3 Risk summary table

To complete the risk summary table:

  • list any risks assessed as medium or high at the inherent risk assessment stage in section 3
  • summarise any mitigations or controls that have been or will be applied
  • explain how these mitigations have influenced the residual risk rating.

12.4 Record of overall residual risk rating

To complete this section, choose an overall residual risk rating for the AI use case. Refer to your response to section 12.3.

12.5 Internal governance body review

If your use case's inherent risk is rated as high at section 3, you are required under the AI policy to apply specific actions, including creating or reusing a governance body for the purpose of governing high-risk AI. You may document the outcome of the governance body review here, including any recommendations and next steps.

Appendix: Risk consequence guidance table

This table is designed to help you select the appropriate consequence level for the risk questions in sections 3.1 to 3.8. Examples are illustrative, not exhaustive.

For each risk below, consequences are described at five levels: insignificant, minor, moderate, major and severe.
Negatively affecting public accessibility or inclusivity of government services

Insignificant: Insignificant compromises to accessibility or inclusivity of services. Minor technical issues cause brief inconvenience but no actual barriers to access or inclusion. Issues are rapidly resolved with minimal impact on user experience.

Minor: Limited, reversible compromises to accessibility or inclusivity of services. Some people experience difficulties accessing services due to technical issues or design oversights. Barriers are short-term and addressed once identified, with additional support provided to people affected.

Moderate: Many compromises are made to the accessibility or inclusivity of services, with considerable access challenges for a modest number of users. Resolving access issues requires substantial effort and resources. Certain groups may be disproportionately impacted, and affected users experience frustration and delays in receiving services.

Major: Extensive compromises are made to the accessibility or inclusivity of services, which may include some essential services. Ongoing delays require external technical assistance to resolve. Widespread inconvenience, frustration, public distress and potential legal implications, with vulnerable user groups disproportionately impacted.

Severe: Widespread, irreversible and ongoing compromises are made to the accessibility or inclusivity of services, including some essential services. The majority of users, especially vulnerable groups, are affected. Essential services are inaccessible for extended periods, causing significant public distress, legal implications and a loss of trust in government efficiency. Comprehensive and immediate action is urgently needed to rectify the situation.

Unfair discrimination against individuals, communities or groups

Insignificant: Negligible instances of discrimination, with virtually no discernible effect on individuals, communities or groups. Issues are proactively identified and rapidly addressed before causing harm.

Minor: Limited instances of unfair discrimination occur, affecting a small number of individuals. Cases are relatively isolated, and corrective measures minimise their impact.

Moderate: Moderate levels of discrimination lead to noticeable harm to certain individuals, communities or groups. These incidents raise bias and fairness concerns and require targeted interventions.

Major: Significant discrimination results in major, tangible harm to individuals and multiple communities or groups. Rebuilding trust requires substantial reforms and remediation efforts.

Severe: Pervasive and systemic discrimination causes severe harm across a broad spectrum of the population, particularly marginalised and vulnerable groups. Public outrage, potential legal action and a profound loss of trust in government. Immediate, sweeping reforms and accountability measures are required.

Perpetuating stereotyping or demeaning representations of individuals, communities or groups

Insignificant: Mild stereotypes are inadvertently reinforced, but these instances are quickly identified and rectified with no lasting harm or public concern.

Minor: Isolated cases of stereotyping affect limited members of the community, with some noticing and raising concerns. Prompt action mitigates the issue, preventing broader impact.

Moderate: Moderate stereotyping by AI systems leads to noticeable public discomfort and criticism, disproportionately affecting certain communities or groups. Targeted corrective measures are required to address the issue and prevent recurrence.

Major: Significant and widespread reinforcement of harmful stereotypes and demeaning representations causes public outcry and damages the relationship between communities and government entities. Urgent, comprehensive strategies are needed to rectify these representations and restore trust.

Severe: Pervasive and damaging stereotyping severely harms multiple communities, leading to widespread distress, potential legal consequences and a profound breach of trust in government use of technology. Immediate, sweeping actions are required to address the harm, including system overhauls and public apologies.

Harm to individuals, communities, groups, businesses or the environment

Insignificant: Inconsequential glitches with no real harm to the public, business operations or ecosystems. Easily managed through routine measures.

Minor: Isolated incidents mildly affecting the public. Slight inconveniences or disruptions to businesses, leading to manageable financial costs. Limited, manageable environmental disturbances affecting local ecosystems or resource consumption.

Moderate: Noticeable negative effects on the public. Businesses face operational challenges or financial losses, affecting their competitiveness. Obvious environmental degradation, including pollution or habitat disruption, prompts public concern.

Major: Significant public harm causing distress and potentially lasting damage. Significant harm to a wide range of businesses, resulting in substantial financial losses, layoffs and long-term reputational damage. Ecosystem wellbeing is compromised, with substantial pollution, loss of biodiversity and resource depletion.

Severe: Widespread, profound harm and severe distress affecting broad segments of the public. Profound damage across the business sector, leading to bankruptcies, major job losses and a lasting negative impact on the economy. Comprehensive environmental destruction, leading to critical loss of biodiversity, irreversible ecosystem damage and severe resource scarcity.

Compromising privacy due to the sensitivity, amount or source of the data being used by an AI system

  • Insignificant data handling errors occur without compromising sensitive information. Incidents are quickly rectified, maintaining public trust in data security.
  • Isolated exposure of limited sensitive data affects a small group of individuals. Swift actions are taken to secure the data and prevent further incidents.
  • Breach of moderate amounts of sensitive data, leading to privacy concerns among the affected populace. Some individuals experience inconvenience and distress.
  • Serious misuse of sensitive private data affects a large segment of the population, leading to widespread privacy violations and a loss of public trust. Comprehensive measures are urgently required to secure data and address the privacy breaches.
  • Significant potential to expose sensitive information of a vast number of individuals, causing severe harm and identity-theft risks; use of sensitive personal information in a way that is likely to draw public criticism, with limited ability for individuals to choose how their information is used. Significant potential to harm trust in government information handling, with potential for lasting consequences.

Raising security concerns due to the sensitivity or classification of the data being used by an AI system

  • Inconsequential security lapses occur without actual misuse of sensitive data. Quickly identified and corrected, with no real harm done. These incidents may serve as prompts for reviewing security protocols.
  • A limited security breach involves unauthorised access to protected data, affecting a small number of records with minimal impact. Immediate actions secure the breach, and affected individuals are notified and supported. The incident is a catalyst for a review of security protocols.
  • A security incident leads to the compromise of a moderate volume of sensitive data, raising concerns over data protection and privacy. The breach necessitates a thorough investigation and enhanced security measures.
  • A significant security breach results in extensive unauthorised access to sensitive or protected data, causing considerable concern and distress among the public. Urgent security upgrades and support measures for impacted individuals are implemented to restore security and trust.
  • A massive security breach exposes a vast amount of sensitive and protected data, leading to severe implications for national security, public safety and individual privacy. This incident triggers an emergency response, including legal actions, a major overhaul of security systems and long-term support for those affected.

Raising security concerns due to implementation, sourcing or characteristics of the AI system

  • Inconsequential security concerns arise due to characteristics of the AI system, such as software bugs, which are promptly identified and fixed with no adverse effects on overall security. These issues may serve as lessons, leading to slight improvements in the system's security framework.
  • Certain characteristics of the AI system lead to vulnerabilities that are exploited in a limited manner, causing minor security breaches. Immediate remediation measures are taken, and the system is updated to prevent similar issues.
  • A moderate security risk is realised when intrinsic features of the AI system allow for unintended access or data leaks. The incident affects a noticeable but contained component of the AI system and prompts a comprehensive security review and the implementation of more robust safeguards.
  • Significant security flaws in the AI system's design result in major breaches, compromising a large amount of data and severely affecting system integrity. The incident leads to an urgent overhaul of security measures and protocols, alongside efforts to mitigate the damage.
  • Critical security vulnerabilities inherent to the AI system lead to widespread breaches, exposing vast quantities of sensitive data and jeopardising national security or public safety. The incident results in severe consequences, necessitating emergency responses, extensive system redesigns and long-term efforts to recover from the breach and prevent recurrence.

Posing a reputational risk or undermining public confidence in the government

  • Isolated reputational issues arise and are quickly addressed and explained. Causes negligible damage to public trust in government capabilities.
  • Small-scale AI mishaps lead to brief public concern, slightly denting the government's reputation. Prompt clarification and corrective measures minimise the long-term impact on public confidence. Seen by the government as poor management.
  • Misapplications result in moderate public dissatisfaction and questioning of government oversight. Requires remedial actions to mend trust and address concerns. Seen by government and opposition as failed management.
  • Widespread public scepticism and criticism, seriously damaging the government's image. Requires substantial efforts to rebuild public confidence through transparency, accountability and improvement of AI governance. High-profile negative stories; seen by government and opposition as significant failed management.
  • Severe misuse or failure of AI systems leads to profound public distrust and criticism, significantly undermining confidence in government effectiveness and integrity. Requires comprehensive, long-term strategies for the rehabilitation of public trust, including systemic changes and ongoing engagement. Seen by government and opposition as a catastrophic failure of management; the Minister expresses loss of confidence or trust in the agency.

 Version 2.0

Policy for the responsible use of AI in government

Need to know

This version of the policy (v2.0) is effective 15 December 2025. The previous version (v1.1) took effect on 1 September 2024.

It applies to all non-corporate Commonwealth entities, with some exceptions.

Departments and agencies must meet the policy's mandatory requirements.


Policy aims

This policy aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations.

Embrace the opportunity

This policy aims to provide a unified approach to enable government to accelerate AI adoption and embrace the AI opportunity. It is designed to reduce barriers to government adoption by helping agencies confidently approach AI governance and implementation.

It aims to ensure agencies have the right settings in place to take advantage of the opportunities presented by AI and fully realise benefits such as increased and improved efficiency, accuracy and service delivery.

Strengthen public trust

This policy aims to strengthen public trust in government adoption of AI by positioning the Australian Government as an exemplar in safe and responsible AI use. 

It is designed to enable the responsible use of AI across government, through setting consistent requirements for transparency and accountability, and by requiring risk-based oversight of AI use cases.  

Adapt to change

This policy aims to embed a forward-leaning, adaptive approach for government’s use of AI that evolves as the technological and policy environment changes.

It supports agencies at different stages of their AI adoption journey and sets requirements that scale with the agency’s use of AI. 
 

Implementation

Application

This version of the policy (v2.0) is effective 15 December 2025. It replaces version 1.1 of the policy, which came into effect on 1 September 2024.

All non-corporate Commonwealth entities (NCEs), as defined by the Public Governance, Performance and Accountability Act 2013, must apply this policy.

Corporate Commonwealth entities are also encouraged to apply this policy.

National security carveouts

This policy does not apply to Defence or to members of the National Intelligence Community (NIC).

The NIC includes:

  • Office of National Intelligence (ONI)
  • Australian Signals Directorate (ASD)
  • Australian Security Intelligence Organisation (ASIO)
  • Australian Secret Intelligence Service (ASIS)
  • Australian Geospatial-Intelligence Organisation (AGO)
  • Defence Intelligence Organisation (DIO)
  • Australian Criminal Intelligence Commission (ACIC)
  • the intelligence role and functions of the Australian Transaction Reports and Analysis Centre (AUSTRAC), Australian Federal Police (AFP), the Department of Home Affairs and the Department of Defence.

Defence and members of the NIC may voluntarily adopt elements of this policy where they are able to do so without compromising national security capabilities or interests.

Existing frameworks

The challenges raised by government use of AI are complex and inherently linked with other considerations, such as the APS Code of Conduct, data governance, cyber security, privacy and ethics practices.

This policy has been designed to complement and strengthen – not duplicate – existing frameworks, legislation and practices that touch on government’s use of AI. 

This policy must be read and applied alongside existing frameworks and laws to ensure agencies meet all their obligations. 


Principles

  • Adopt AI to enhance efficiency, decision-making, policy outcomes and government service delivery for the benefit of Australians.
  • Have clear accountabilities for the adoption of AI and understand its use.
  • Build public trust through transparency about government AI use.
     

Mandatory requirements

AI transparency statement

Agencies must make a publicly available statement outlining their approach to AI adoption and use, as prescribed under the Standard for transparency statements.

The statement must be reviewed and updated annually or sooner, should the agency make significant changes to its approach to AI.

Agencies must notify the DTA when they publish and make any changes to their AI transparency statement by emailing ai@dta.gov.au.
 

Strategic position on AI adoption

Agencies must develop a strategic position on AI adoption within 6 months of this policy taking effect. This position is to emphasise how AI opportunities can be identified and embraced by the agency.

Agencies must communicate their strategic position on AI to give staff clear direction on AI adoption. In line with their current and anticipated use of AI, agencies can develop a standalone AI strategy, augment an existing strategy or create other materials to communicate the approach to staff.
 

Accountable officials

Agencies must designate accountable official(s) to be accountable for implementing this policy.

Agencies must follow the Standard for accountability when designating accountable official(s) and implementing this requirement. The responsibilities of accountable officials are set out in the standard.

Agencies must notify the DTA when they designate and make any changes to their accountable official(s) by emailing ai@dta.gov.au.
 

Accountable use case owners

Agencies must designate an accountable use case owner for each in-scope AI use case within 12 months of this policy taking effect. Accountable official(s) are to maintain a register of accountable use case owners.

Agencies must follow the Standard for accountability when implementing this requirement. The responsibilities of accountable use case owners are set out in the standard.
 

Internal AI use case register

Agencies must create a register of in-scope AI use cases to enable accountable official(s) to record accountable use case owners within 12 months of this policy taking effect.

Agencies must share the register with the DTA every 6 months, commencing from when they create the register to meet the above requirement.

The Standard for accountability lists the minimum fields agencies must capture in the use case register. Agencies can add additional fields to meet their organisational needs. An existing register may be reused for the purposes of meeting this requirement. The standard also provides the instructions for how to share agency registers with the DTA.
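As an illustration only, a register entry could be modelled as a simple record. Every field name below is a hypothetical assumption for the sketch; the authoritative minimum fields are those listed in the Standard for accountability.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UseCaseRegisterEntry:
    # Hypothetical record for an internal AI use case register.
    # Field names are illustrative; the Standard for accountability
    # defines the actual minimum fields agencies must capture.
    use_case_name: str
    accountable_use_case_owner: str  # required for each in-scope use case
    risk_rating: str                 # e.g. "low", "medium" or "high"
    last_assessed: date

# Example entry; agencies would update it as ownership or risk ratings change.
entry = UseCaseRegisterEntry(
    use_case_name="Correspondence triage assistant",
    accountable_use_case_owner="Director, Digital Services",
    risk_rating="medium",
    last_assessed=date(2026, 6, 30),
)
```

A record like this could be extended with additional fields to meet organisational needs, as the policy allows.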


Preparedness and operations

The principles and requirements included in this section standardise key elements of AI governance that allow agencies to build AI capability and use AI responsibly.

Principles

  • Protect Australians from AI harms.
  • APS officers need to be able to explain, justify and take ownership of advice and decisions when using AI.
  • AI capability built for the long term.
  • Flexibility and adaptability to accommodate technological advances.
     

Mandatory requirements

Operationalise the responsible use of AI

Agencies must establish an approach to embed responsible AI practices within 12 months of this policy taking effect. This may vary according to the scale and scope of agency AI use.

At a minimum, the approach will provide an agency with:

  • a process for adopting AI use cases in line with the implemented actions of this policy, as well as the agency's enterprise risk management and governance approach.
  • a way to inform staff who are designing and implementing AI use cases about Australia's AI Ethics Principles.
  • a pathway for staff to report AI safety concerns, including AI incidents.
  • pathways for the public to report AI safety concerns, appropriate to the agency's AI use.
  • clear processes to address AI incidents aligned to their ICT incident management approach – incident remediation must be overseen by an appropriate governance body or senior executive and should be undertaken in line with any other legal obligations.

Agencies may modify existing policies, procedures and frameworks, or create new ones. Smaller agencies with minimal AI adoption could amend existing documentation and/or assign key personnel to guide staff on responsible AI adoption on an ad hoc basis. Agencies with greater AI adoption could create dedicated AI policies, procedures and/or frameworks to support responsible adoption. Accountable officials are responsible for deciding the appropriate approach for their agency.
 

Staff training on AI

Agencies must implement mandatory training for all staff on responsible AI use within 12 months of this policy taking effect. Agencies should consider the Guidance for staff training on AI and can use the AI fundamentals training module to meet the requirement. They can use the module as provided, modify it, or incorporate it into an existing training program based on their specific context and requirements. Alternatively, agencies can allow their staff to access the module directly through APSLearn.

Agencies should implement additional training for staff as required, in consideration of their roles and responsibilities. For example, additional training for those responsible for the procurement, development, training and deployment of AI systems.
 

AI technical standard

It is strongly recommended that agencies apply the AI technical standard for Australian Government. The standard is designed for Australian Government agencies adopting AI. It embeds the principles of fairness, transparency, and accountability into a set of technical requirements and guidelines.
 

AI procurement guidance

It is strongly recommended that agencies refer to the Guidance on AI procurement in government when procuring AI products and services. The guidance offers practical, step-by-step advice to help agencies identify and manage AI-specific risks while maintaining procurement best practices.  
 

Agencies should consider

Applying the generative AI guidance

Applying the Managing access to public generative AI tools guidance and the Using public generative AI tools safely and responsibly guidance.
 

Capability development

Developing staff AI capability to effectively use AI and comply with AI policy and regulation.


AI use case impact assessment

The principles and requirements in this section are intended to ensure that the potential impacts of AI use cases are assessed and that higher-risk AI receives additional oversight.

Principles

  • Ongoing monitoring and evaluation of AI uses.
  • AI risk mitigation is proportionate and targeted.
  • AI use is lawful, ethical, responsible, transparent and explainable to the public.
     

Mandatory requirements

All new AI use cases

Agencies must assess all new AI use cases against the in-scope criteria (Appendix C) to determine if they are in scope of the policy. The assessment must be documented and take place during the design phase while developing requirements.

Agencies must begin AI use case assessments within 12 months of this policy taking effect.

For existing use cases not yet assessed, agencies must determine whether they are in scope of this policy and apply all relevant policy actions by 30 April 2027.

Where practicable, agencies should implement the requirements ahead of the deadlines listed above.
 

In-scope AI use cases

For AI use cases that are in scope, agencies must conduct an AI use case impact assessment. Agencies are to commence the assessment at the design stage. Before the solution is deployed, agencies must finalise the assessment and apply any agreed risk treatments.

Agencies may conduct an AI use case impact assessment by using either:

  • the Australian Government AI impact assessment tool (the impact assessment tool)
  • an internal process that integrates all provisions of the impact assessment tool.

Where an agency integrates the tool, they must ensure:

  • the internal process is consistent with the impact assessment tool
  • it delivers the same (or a higher) risk outcome for inherent and residual risk.

Agencies must be able to revise their internal process in response to any impact assessment tool updates.

Agencies must add each in-scope AI use case to their internal register of AI use cases and update it as required, including changes to risk ratings and accountable use case owners. When deploying an in-scope AI use case, agencies must:

  • regularly monitor and evaluate their use case to ensure it is operating as intended and that risks are being effectively managed.
  • re-validate the AI use case impact assessment by checking its accuracy and updating it when there is a material change in the use case scope, usage or operation.

Agencies should also monitor changes that are not initiated by the agency. For example, vendor changes and changes in the regulatory environment. Agencies could also ask vendors to provide information on updates through contractual mechanisms.
 

Medium-risk AI use cases

If an agency determines their in-scope AI use case has an inherent medium-risk rating when completing an AI use case impact assessment, they should consider whether the use case would benefit from being governed through a designated board or a senior executive. If they apply additional governance, agencies should choose an approach appropriate to the size and scope of the agency.
 

High-risk AI use cases

If an agency determines their in-scope AI use case has an inherent high-risk rating when completing an AI use case impact assessment, they must:

  • report the use case to the agency accountable official with the reasons for the inherent high-risk rating, proposed mitigations and residual risks
  • govern the use case through a designated board or a senior executive, whichever is appropriate for the size and scope of the agency.

Once an agency has decided to deploy the use case, they must:

  • report the use case to the DTA through the accountable official, see the Standard for accountability
  • establish a system to review the use case regularly, at least every 12 months. The review must report to the relevant governing board or senior executive on whether the use case is operating as intended and whether risks are being effectively managed. The review must also consider the AI use case impact assessment and any revisions to it, if required.
     

Out-of-scope AI use cases

For use cases assessed as out of scope of this policy, agencies may adopt the use case while ensuring they comply with relevant existing obligations, such as privacy and security.

If an agency adopts an out-of-scope AI use case, they must assess whether the use case becomes in-scope of this policy if there is a material change in the scope, usage or operation of the solution.

If a use case is in scope, agencies must follow any applicable actions in this policy.
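Read together, the requirements above amount to a simple triage: screen the use case against the in-scope criteria, rate its inherent risk, then apply oversight proportionate to that rating. The sketch below is an interpretive illustration only; the function name and rating labels are assumptions, not part of the policy.

```python
def policy_actions(in_scope: bool, inherent_risk: str = "low") -> list[str]:
    """Illustrative mapping from assessment outcome to policy actions.
    The rating labels used here are assumptions for the sketch."""
    if not in_scope:
        # Out-of-scope use cases proceed under existing obligations but must
        # be re-assessed if scope, usage or operation materially changes.
        return ["comply with existing obligations",
                "re-assess on material change"]
    actions = ["conduct AI use case impact assessment",
               "add to internal AI use case register",
               "monitor, evaluate and re-validate the assessment"]
    if inherent_risk == "medium":
        actions.append("consider governance via a board or senior executive")
    elif inherent_risk == "high":
        actions += ["report to the accountable official",
                    "govern via a designated board or senior executive",
                    "report to the DTA once deployed",
                    "review at least every 12 months"]
    return actions
```

For example, a high-risk in-scope use case accumulates the baseline assessment and register steps plus the four additional high-risk obligations.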


Appendix B: Definitions

Artificial intelligence

While there are various definitions of what constitutes AI, for the purposes of this policy agencies are to apply the definition provided by the Organisation for Economic Co-operation and Development (OECD):

"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
