• Generative AI guidance

    The interim guidance on government use of public generative AI tools helps agencies and their staff make responsible choices about when and how they use generative AI tools.

    The guidance is currently hosted on the Australian Government Architecture website.

  • National framework for the assurance of artificial intelligence in government

    A joint approach to safe and responsible AI by the Australian, state and territory governments.

    The framework is hosted on the Department of Finance website.

  • Australia’s AI Ethics Principles

    Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure and reliable.

    The principles inform the Australian Government's exemplar use of AI and are hosted on the Department of Industry, Science and Resources website.

  • Safe and responsible AI in Australia consultation

    Discussion paper, public submissions and the government's interim response to its consultation on safe and responsible AI in Australia.

    These materials are hosted on the Department of Industry, Science and Resources website.

  • External resources

    To support the Australian Government's responsible use of AI, agencies and their staff are encouraged to familiarise themselves with, and make use of, the resources below.

  • Register for updates

    Sign up to the DTA's newsletter to receive updates on all things AI in government.

  • This risk matrix was developed to help agencies identify new, high-risk use cases and support accountable officials to fulfil their policy responsibilities.

    The policy does not prescribe risks that agencies should be assessing or the system used to determine the final risk outcomes. 

  • Resources for AI in government

  • This policy aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations.

  • Your responsibilities

    Agencies must designate accountability for implementing the policy to accountable official(s) (AOs), who must: 

    • be accountable for implementation of the policy within their agencies
    • notify the Digital Transformation Agency (DTA) where the agency has identified a new high-risk use case by emailing ai@dta.gov.au
    • be a contact point for whole-of-government AI coordination
    • engage in whole-of-government AI forums and processes
    • keep up to date with changing requirements as they evolve over time.

    The policy does not make AOs responsible for the agency's AI use cases; however, an agency may decide to apply additional responsibilities to its chosen AOs.

  • Guidance for staff training on AI

    Guidance for providing staff with training on AI is available at the link below.

  • Download the policy

    Download a PDF of the policy for responsible use of AI in government.

  • Download the standard

    Download a PDF of the standard for accountable officials.

  • ""

    Policy for the responsible use of AI in government

    A framework to position the Australian Government as an exemplar for safe and responsible use of artificial intelligence.

  • Standard for AI transparency statements

    Version 1.1

    Use the following information to support your agency’s implementation of the policy for responsible use of AI in government.

  • Your responsibilities

    Under the policy, agencies must make publicly available a statement outlining their approach to AI adoption as directed by the Digital Transformation Agency (DTA).

    This standard provides the DTA’s direction which agencies must follow. It establishes a consistent format and expectation for AI transparency statements in the Australian Government. Clear and consistent transparency statements build public trust and make it easier to understand and compare how government agencies adopt AI.

    Agencies must provide the following information regarding their use of AI in their transparency statement:

    • the intentions behind why the agency uses AI or is considering its adoption
    • classification of AI use according to usage patterns and domains
    • classification of use where the public may directly interact with, or be significantly impacted by, AI without a human intermediary or intervention
    • measures to monitor the effectiveness of deployed AI systems, such as governance or processes
    • compliance with applicable legislation and regulation
    • efforts to identify and protect the public against negative impacts
    • compliance with each requirement under the Policy for responsible use of AI in government
    • when the statement was most recently updated.

    Statements must use clear, plain language that is consistent with the Australian Government Style Manual and avoid technical jargon. They must also provide or direct to a public contact email for further enquiries.

    Agencies must publish transparency statements on their public-facing website. It’s recommended that a link to the statement be placed in a global menu, consistent with the approach often taken for privacy policies.

    Transparency statements must be reviewed and updated at these junctures:

    • at least once a year
    • when making a significant change to the agency’s approach to AI
    • when any new factor materially impacts the existing statement’s accuracy.
  • How to apply

    Implementing the AI transparency statements

    The policy provides a coordinated approach for the use of AI across the Australian Government. It builds public trust by supporting the Australian Public Service (APS) to engage with AI in a responsible way.

    Transparency is critical to building public trust and is an important aim of the policy and broader APS Reform agenda. The public should have confidence that agencies monitor the effectiveness of deployed AI systems and have measures to protect against negative impacts.

    AI transparency statements help agencies to meet these aims by providing a foundational level of transparency on their use of AI. They publicly disclose:

    • how AI is used and managed by the agency
    • a commitment to safe and responsible use
    • compliance with the policy.

    Agency responses to the required information are intended to provide a high-level overview of agency AI use and management in line with the policy intent. 

    Agencies are encouraged to conduct a stocktake of individual use cases to determine their classification of AI use. They are not required to list individual use cases or provide use-case-level detail. However, agencies may choose to provide detail beyond the requirements to publicly explain their approach to AI.

    The agency’s accountable officials should provide the DTA with a link to the statement when it is published or updated by emailing ai@dta.gov.au.
