Preparedness and operations

The principles and requirements included in this section standardise key elements of AI governance that allow agencies to build AI capability and use AI responsibly.

Principles

  • Protect Australians from AI harms.
  • Enable APS officers to explain, justify and take ownership of advice and decisions made using AI.
  • Build AI capability for the long term.
  • Remain flexible and adaptable to accommodate technological advances.

Mandatory requirements

Operationalise the responsible use of AI

Agencies must establish an approach to embed responsible AI practices within 12 months of this policy taking effect. This may vary according to the scale and scope of agency AI use.

At a minimum, the approach must provide the agency with:

  • a process for adopting AI use cases in line with the implemented actions of this policy, as well as the agency's enterprise risk management and governance approach.
  • a way to inform staff who are designing and implementing AI use cases about Australia's AI Ethics Principles.
  • a pathway for staff to report AI safety concerns, including AI incidents.
  • pathways for the public to report AI safety concerns, appropriate to the agency's AI use.
  • clear processes to address AI incidents, aligned to the agency's ICT incident management approach. Incident remediation must be overseen by an appropriate governance body or senior executive and should be undertaken in line with any other legal obligations.

Agencies may modify existing policies, procedures and frameworks, or create new ones. Smaller agencies with minimal AI adoption could amend existing documentation and/or assign key personnel to guide staff on responsible AI adoption on an ad hoc basis. Agencies with greater AI adoption could create dedicated AI policies, procedures and/or frameworks to support responsible adoption. Accountable officials are responsible for deciding the appropriate approach for their agency.
 

Staff training on AI

Agencies must implement mandatory training for all staff on responsible AI use within 12 months of this policy taking effect. Agencies should consider the Guidance for staff training on AI and can use the AI fundamentals training module to meet the requirement. They can use the module as provided, modify it, or incorporate it into an existing training program based on their specific context and requirements. Alternatively, agencies can allow their staff to access the module directly through APSLearn.

Agencies should implement additional training as required, reflecting staff roles and responsibilities. For example, staff responsible for the procurement, development, training and deployment of AI systems may need further, role-specific training.
 

AI technical standard

It is strongly recommended that agencies apply the AI technical standard for Australian Government. Designed for Australian Government agencies adopting AI, the standard embeds the principles of fairness, transparency and accountability into a set of technical requirements and guidelines.
 

AI procurement guidance

It is strongly recommended that agencies refer to the Guidance on AI procurement in government when procuring AI products and services. The guidance offers practical, step-by-step advice to help agencies identify and manage AI-specific risks while maintaining procurement best practice.
 

Agencies should consider

Applying the generative AI guidance

Applying the Managing access to public generative AI tools guidance and the Using public generative AI tools safely and responsibly guidance.
 

Capability development

Developing staff AI capability to effectively use AI and comply with AI policy and regulation.
