Standard for AI transparency statements
Supporting the policy for responsible use of AI in government
Use the following information to support your agency's implementation of the policy for responsible use of AI in government.
Your responsibilities
Under the policy, agencies must make a publicly available statement that outlines their approach to AI adoption, as directed by the Digital Transformation Agency (DTA).
Agencies must follow this standard, which sets expectations and a consistent format for AI transparency statements across the Australian Government. Clear and consistent transparency statements build public trust and make it easier to understand and compare how government agencies adopt AI.
At a minimum, agencies must provide the following information regarding their use of AI in their transparency statement:
- the intentions behind why the agency uses AI or is considering its adoption
- classification of AI use according to usage patterns and domains, as listed at Attachment A
- classification of use where the public may directly interact with, or be significantly impacted by, AI or its outputs without human review
- measures to monitor the effectiveness of deployed AI systems and protect the public against negative impacts
- overview of compliance with the requirements under the Policy for responsible use of AI in government
- compliance with applicable legislation and regulation
- when the statement was most recently updated.
Statements must use clear, plain language[1] that avoids technical jargon and is consistent with the Australian Government Style Manual. They must also provide, or direct readers to, a contact email for further public enquiries.
Agencies must publish transparency statements on their public-facing website. It's recommended that a link to the statement be placed in a global menu, in line with the approach often taken for privacy policies.
Transparency statements must be reviewed and updated:
- at least once a year
- when making a significant change to the agency's approach to AI
- when any new factor materially impacts the existing statement's accuracy.
How to apply
Implementing the AI transparency statements
The policy provides a coordinated approach for the use of AI across the Australian Government. It builds public trust by supporting the Australian Public Service (APS) to engage with AI in a responsible way.
Transparency is critical to public trust and is an important aim of the policy and broader APS Reform agenda[2]. The public should have confidence that agencies monitor the effectiveness of deployed AI systems and have measures to protect against negative impacts.
AI transparency statements help agencies meet these aims by providing a foundational level of transparency on their use of AI. They publicly disclose:
- how AI is used and managed by the agency
- a commitment to safe and responsible use
- compliance with the policy.
Agency transparency statements are intended to provide a high-level overview of agency AI use and management in line with the policy intent.
Agencies are not required to list individual use cases or provide use case level detail. However, agencies may choose to provide detail beyond the requirements to publicly explain their approach to AI.
Agencies must send the DTA a link to the statement when it is published or updated by emailing ai@dta.gov.au.
Questions about implementation
Accountable officials can contact the DTA with questions about implementing the transparency statements by emailing ai@dta.gov.au.
Footnotes
1. Australian Government Style Manual, Plain language and word choice.
2. APS Reform, Priority one: An APS that embodies integrity in everything it does. "The APS acts with integrity and fairness and is accountable and transparent in everything it does. This will build public trust and strengthen standards of integrity in our federal government." https://www.apsreform.gov.au/about-aps-reform/our-focus-areas