
Supporting the policy for responsible use of AI in government
The information in this standard supports your agency's implementation of the policy for responsible use of AI in government. It covers Accountable Officials, Accountable Use Case Owners and Internal Use Case Registers.
Agencies must designate accountability for implementing the policy to accountable official(s) (AOs), who must:
An agency may decide to apply additional responsibilities to their chosen AOs.
Agencies may choose AOs who suit the agency context and structure.
The responsibilities may be vested in an individual or in the chair of a body, and may be split across officials or assigned to existing roles to suit agency preferences, such as the Chief Information Officer, Chief Technology Officer or Chief Data Officer.
Implementation of the policy is not solely focused on technology, so AOs may also be selected from business or policy areas. AOs should have the authority and influence to effectively drive the policy's implementation in their agency.
Agencies may choose to share AO responsibilities across multiple leadership positions.
Agencies must notify the DTA of their AO selection, including the contact details of all AOs, at initial selection and whenever the accountable roles change. Notify the DTA by emailing ai@dta.gov.au.
AOs are accountable for their agency's implementation of the policy. Agencies should implement the entire policy as soon as practicable, considering the agency's context, size and function.
The mandatory actions set out in the policy must be implemented within the specified timelines.
The policy provides a coordinated approach for the use of AI across the Australian Government. It builds public trust by supporting the Australian Public Service (APS) to engage with AI in a responsible way. AOs should assist in delivering its aims by:
AOs should also consider the following activities:
In line with the Standard for Transparency Statements, agencies must provide the DTA with a link to their agency transparency statement each time it is updated, by emailing ai@dta.gov.au.
When their agency decides to deploy a new use case with an inherent high-risk rating, AOs must notify the DTA by emailing ai@dta.gov.au.
AOs must also notify the DTA when an existing AI use case has been re-assessed as having an inherent high risk, or when a use case is no longer high risk.
The notification should include:
This is not intended to prevent agencies from adopting the use case. Instead, it will help government develop risk mitigation approaches and maintain a whole-of-government view of high-risk use cases.
At times, the DTA will need to collect information and coordinate activities across government to mature the whole-of-government approach and policy.
AOs are the primary point of contact within their agency. They must respond to DTA requests for information and facilitate connection to the appropriate internal areas for information collection and agency participation in these activities.
AOs must participate in, or nominate a delegate for, whole-of-government forums and processes which support collaboration and coordination on current and emerging AI issues. These forums will be communicated to AOs as they emerge.
The policy will evolve as technology, leading practices and the broader regulatory environment mature. While the DTA will communicate changes, AOs should keep themselves and stakeholders in their agency up to date on:
AOs can contact the DTA with questions about policy implementation by emailing ai@dta.gov.au.
Agencies must ensure each AI use case that is in scope of the policy (see Appendix C) has designated accountability registered with the agency's AO(s) as an accountable use case owner. Accountable use case owners must:
Accountable use case owners should ensure records demonstrate transparency and accountability in the design, development, deployment and monitoring of AI systems related to their use case. This may include establishing documentation and traceability of decisions and changes, ensuring information accessibility and availability to assist with audits, and ensuring explainability of technical and non-technical information.
In setting the role of accountable use case owner, accountability can:
Accountable use case owners should either have or be able to access appropriate skills and expertise to identify risks and emerging issues related to their AI use case. Accountable use case owners must also be familiar with Australia's AI Ethics Principles, the Australian Government Impact Assessment Tool and the policy.
Accountable use case owners of high-risk use cases must have the ability to identify and manage risks and emerging issues.
Accountable use case owner actions can be delegated to other suitable staff.
Agencies must create a register of AI use cases that are in scope of the policy to enable registration of an accountable use case owner with accountable official(s). At a minimum, the register must include the following fields:
For AI use cases with an inherent high-risk rating, the register must also include:
Agencies should ensure that use case registers are kept up to date through periodic review.
Agencies must share the register with the DTA every 6 months, commencing from when they create the register to meet the policy requirement. They can share the register by emailing ai@dta.gov.au or through a method pre-agreed with the DTA. The DTA may update the required method of submission during policy implementation.
Supporting the policy for responsible use of AI in government
Use the following information to support your agency's implementation of the policy for responsible use of AI in government.
Under the policy, agencies must make a publicly available statement that outlines their approach to AI adoption, as directed by the Digital Transformation Agency (DTA).
Agencies must follow this standard, which sets the direction for AI transparency statements including expectations and formatting. It establishes a consistent format and expectation for AI transparency statements in the Australian Government. Clear and consistent transparency statements build public trust and make it easier to understand and compare how government agencies adopt AI.
At a minimum, agencies must provide the following information regarding their use of AI in their transparency statement:
Statements must use clear, plain language that avoids technical jargon and is consistent with the Australian Government Style Manual. They must also provide, or direct readers to, a contact email for further public enquiries.
Agencies must publish transparency statements on their public-facing website. It is recommended that a link to the statement is placed in a global menu, in line with the approach often taken for privacy policies.
Transparency statements must be reviewed and updated at these junctures:
The policy provides a coordinated approach for the use of AI across the Australian Government. It builds public trust by supporting the Australian Public Service (APS) to engage with AI in a responsible way.
Transparency is critical to public trust and is an important aim of the policy and the broader APS Reform agenda. The public should have confidence that agencies monitor the effectiveness of deployed AI systems and have measures to protect against negative impacts.
AI transparency statements help agencies meet these aims by providing a foundational level of transparency on their use of AI. They publicly disclose:
Agency transparency statements are intended to provide a high-level overview of agency AI use and management in line with the policy intent.
Agencies are not required to list individual use cases or provide use case level detail. However, agencies may choose to provide detail beyond the requirements to publicly explain their approach to AI.
Agencies must send the DTA a link to the statement when it is published or updated by emailing ai@dta.gov.au.
Accountable officials can contact the DTA with questions about implementing the transparency statements by emailing ai@dta.gov.au.
Version 2.0
Use the following information to support your agency's implementation of the policy for responsible use of AI in government.
The policy recognises that AI is used in many areas of the APS and everyday life. As adoption grows, staff at all levels will interact with AI and its outputs, directly or indirectly.
The policy requires agencies to implement mandatory training for all staff on responsible and ethical AI use, regardless of their role. Agencies should also consider if it is appropriate for staff to complete annual refresher training.
Foundational training focused on responsible and ethical AI use should address the following learning outcomes:
The training must align with the Policy for the responsible use of AI in government, Australia’s AI Ethics Principles and expectations such as Australia’s Public Service Values and Code of Conduct.
The Digital Transformation Agency (DTA) has developed AI fundamentals training for agency use. This training meets the learning outcomes described above for foundational AI training and can be used to satisfy the policy requirement for staff training.
The training will be updated periodically to address changes in the AI landscape, such as evolutions in the technology and its use in government.
As staff will likely interact with generative AI now and in the future, it is an area of focus for the training.
The training is designed for all staff regardless of their experience using AI. It takes approximately 20 to 30 minutes to complete.
It does not cover advanced topics such as model training, system development or instructions for specific technologies or platforms.
Agencies’ Learning and Development specialists can access the training for download through the APS Learning Bank. The training is provided in a format compatible with most e-learning platforms. Alternatively, agencies can choose to let their staff access the module directly through APSLearn.
Agencies can use the training module as provided, or choose to modify it or incorporate it into an existing training program based on their specific context and requirements.
Agencies are encouraged to consider additional training for staff based on their roles and responsibilities, such as those responsible for the procurement, development, training and deployment of AI systems.
Where an agency implements the training, accountable officials should monitor completion rates and provide this information if requested by the DTA. This is in line with the activities to measure the implementation of the policy under the Standard for accountability.
Explore the principles and requirements of the policy.
For the purposes of this policy, agencies should apply the definition of AI provided by the Organisation for Economic Co-operation and Development (OECD) and the following definition of AI use case:
Definitions and how to apply them – including an optional approach to grouping AI use cases for some general-purpose AI solutions – are available in Appendix B. The appendix also defines an AI incident.
This policy provides implementation timeframes for agencies to meet some of its requirements. While agencies may need this time to action requirements, agencies should implement them sooner if practicable. Agencies could consider putting in place interim processes and building out their approach as they reach the specified implementation deadline.
This policy specifies actions that apply at the use case level. AI use cases in scope of this policy (referred to as in-scope AI use cases) are use cases that meet any criteria in Appendix C.
In addition to the criteria, the appendix lists areas of AI use to consider that are not automatically high risk, but are more likely to involve risks that require careful attention through an impact assessment. It also provides information on how to apply the policy for agencies experimenting with AI.