• The lifecycle statements

  • Whole of AI lifecycle: statements 1 - 8

  • Whole of AI lifecycle includes statements that apply across multiple AI product lifecycle stages, for ease of use and to minimise content duplication.

  • The challenges for government use of AI are complex and linked with other governance considerations, such as:

    • the APS Code of Conduct
    • data governance
    • cyber security
    • ICT infrastructure
    • privacy
    • sourcing and procurement
    • copyright
    • ethics practices.

    Across the lifecycle stages, agencies should consider:

    • technology operations – to ensure compliance, efficiency, and ethical standards
    • reference architecture – to provide structured frameworks that guide the design, development, and management of AI solutions
    • people capabilities – having the specialised skills required for successful implementation
    • auditability – enabling external scrutiny, supporting transparency, and accountability
    • explainability – identifying what needs to be explained and when, making complex AI processes transparent and trustworthy
    • system bias – preserving intentional, beneficial bias where it delivers meaningful outcomes, while mitigating the sources and impacts of problematic bias
    • version control – tracking and managing changes to information to inform stakeholder decision-making
    • watermarking – to embed visual or hidden markers into generated content so that its creation details can be identified (a toy sketch follows this list).
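    As a toy illustration of hidden watermarking, the sketch below embeds a short provenance marker into generated text using zero-width characters. This is illustrative only: production watermarking schemes for AI-generated content are more robust, and the marker format here is an assumption.

    ```python
    # Toy sketch: hide a short provenance marker inside generated text using
    # zero-width characters. Illustrative only; real watermarking schemes for
    # AI-generated content are far more robust (e.g. statistical token biasing).

    ZERO = "\u200b"  # zero-width space: encodes bit 0
    ONE = "\u200c"   # zero-width non-joiner: encodes bit 1

    def embed_watermark(text: str, marker: str) -> str:
        """Append the marker as an invisible bit sequence after the text."""
        bits = "".join(f"{byte:08b}" for byte in marker.encode("utf-8"))
        hidden = "".join(ONE if bit == "1" else ZERO for bit in bits)
        return text + hidden

    def extract_watermark(text: str) -> str:
        """Recover the hidden marker, ignoring all visible characters."""
        bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("utf-8", errors="replace")

    # Hypothetical marker string, invented for this example.
    output = embed_watermark("Generated summary text.", "agency-model-v1;2025-06-19")
    print(extract_watermark(output))  # agency-model-v1;2025-06-19
    ```

    Visible markers, such as an on-image label or a metadata field, serve the same disclosure goal through different means.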
  • Notes: 

  • Statement 1

  • Secretaries Digital and Data Committee communique 

    Date: 19 June 2025  

    Strategic Discussion: Strengthening Cyber Security and Building Resilience

    Australian Signals Directorate (ASD) Site Tour

    Members toured the classified operations floor.

    Strengthening Cyber Security and Building Resilience

    ASD and the Department of Home Affairs jointly led a discussion on high-level threats and the cyber security uplift and hardening required to address risks.

    Australian Public Service (APS) Digital Skills Program (Pilot) – Discovery findings and pilot proposal

    Services Australia, in partnership with the Australian Public Service Commission (APSC), established a Whole-of-Government (WofG) Multi-Disciplinary Team (MDT) to undertake discovery work that informed the development of a pilot proposal for a campus approach to uplifting APS digital skills. Members endorsed the proposal.

    Adoption of GenAI in Government

    Members discussed the AI in Government Action Plan initiative and current state of AI adoption across the APS, including the importance of leadership in driving confidence, capability and shared solutions across government.

    myGov Investment Pipeline

    The Committee noted and discussed progress on the myGov Investment Pipeline, agreed by the Government in the 2024-25 Budget, including detail on the initial pipeline initiatives and future opportunities. The Committee was also provided an update on the inaugural myGov Strategic Committee meeting, held on 16 May 2025 and attended by 18 Australian Government agencies.

    The date for the next SDDC meeting is 25 September 2025.

  • Downloadable resource

    SDDC Communique 19 June 2025


  • Design: statements 9 - 12

  • Designing AI systems that are effective, efficient, and ethical involves being clear on the problem, understanding the impacts of technical decisions, taking a design approach with humans at the centre and having a clear definition of success.

    In the design stage, agencies consider how the AI system will operate with and impact existing processes, people, data, and technology. This includes considering potential malfunctions and harms.

    Without appropriate design an AI system could:

    • cause harm due to incorrect information arising from AI hallucinations, false positives, or false negatives
    • be used beyond its intended purpose
    • perpetuate existing injustices
    • be misused, misunderstood, or abused
    • be susceptible to malfunctions of another interacting system
    • experience behaviour and performance issues caused by other external factors.

    At the design stage, agencies also determine the performance and reliability measures relevant to their AI system’s tasks. Considerations when selecting metrics include business needs, performance, safety, reliability, explainability, and transparency.
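    To make these measures concrete, the sketch below computes a few common classification metrics, including the false positive and false negative counts that design-stage thresholds might govern. The labels and predictions are invented for illustration; the appropriate metrics depend on each AI system’s tasks.

    ```python
    # Hypothetical example: scoring a binary classifier against labelled
    # outcomes, surfacing the false positives / false negatives for which
    # the design stage should set acceptable thresholds.
    from sklearn.metrics import confusion_matrix, precision_score, recall_score

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # invented ground-truth labels
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # invented AI system outputs

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"false positives: {fp}, false negatives: {fn}")
    print(f"precision: {precision_score(y_true, y_pred):.2f}")
    print(f"recall:    {recall_score(y_true, y_pred):.2f}")
    ```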
     

  • The design stage includes concept development, requirements engineering, and solution design.

  • Services not covered by the Digital Service Standard

    Agencies are encouraged to apply the Digital Service Standard to existing staff-facing services, though this is not mandated.

    The Digital Service Standard does not apply to:

    • state, territory or local government services
    • personal ministerial websites that contain material on a minister’s political activities or views on issues not related to their ministerial role.

    State, territory or local governments and third parties may choose to apply the Digital Service Standard to improve the access and discoverability of their digital services.

    Some services may request an exemption from the Digital Service Standard. See the Exemptions section below.

  • Notes:



  • Data: statements 13 - 19

  • The data stage involves establishing the processes and responsibilities for managing data across the AI lifecycle. This stage includes data used in experimenting, training, testing, and operating AI systems.

  • Data used by an AI system can be classified into development and deployment data.

    Development data includes all inputs and outputs (and reference data for GenAI) used to develop the AI system. The development dataset is made up of smaller datasets – the train, validation, and test datasets (a minimal split sketch follows the list below).

    • Train dataset – this dataset is used to train the AI system. The AI system learns patterns in the train dataset. The train dataset is the largest subset of the development dataset. For GenAI, the train dataset may also include reference or contextual datasets such as retrieval-augmented generation (RAG) datasets and prompt datasets
    • Validation dataset – this dataset is used to evaluate the model's performance during model training. It is used to fine-tune and select the best-performing model, such as through cross validation
    • Test dataset – this dataset is used to evaluate the final model's performance on previously unseen data. This dataset helps provide unbiased evaluation of model performance.
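    A minimal sketch of the three-way split described above, using scikit-learn; the 80/10/10 proportions and synthetic records are assumptions for illustration, not prescribed ratios.

    ```python
    # Minimal sketch: dividing a development dataset into train, validation
    # and test subsets. The 80/10/10 split is illustrative only.
    from sklearn.model_selection import train_test_split

    records = list(range(1000))          # stand-in for a development dataset
    labels = [i % 2 for i in records]    # stand-in labels

    # First carve out the train subset (the largest share) ...
    x_train, x_rest, y_train, y_rest = train_test_split(
        records, labels, test_size=0.2, random_state=42)

    # ... then divide the remainder evenly into validation and test subsets.
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, random_state=42)

    print(len(x_train), len(x_val), len(x_test))  # 800 100 100
    ```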

    Deployment data includes AI system inputs such as live production data, user input data, configuration data, and AI system outputs such as predictions, recommendations, classifications, logs, and system health data. Deployment stage inputs are new and previously unseen by the AI system. 
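    As a small illustration of capturing deployment data, the sketch below logs each prediction together with its input and a model version so that outputs can be audited later. The field names and version string are assumptions for illustration.

    ```python
    # Hypothetical deployment-time logging: record each input, output and
    # model version so predictions can be audited later. Field names are
    # assumptions for illustration only.
    import json
    import time

    def log_prediction(record, prediction, model_version="v1.3.0"):
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "input": record,
            "prediction": prediction,
        }
        print(json.dumps(entry))  # in practice: write to a secure audit log

    log_prediction({"age": 34, "postcode": "2600"}, "eligible")
    ```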

    The performance of an AI system is dependent on robust management of data quality and the availability of data. 

    Key workstreams within this stage include: 

    • data orchestration – establishing central oversight of, and planning, the flow of data to an AI system from multiple datasets
    • data transformation – converting and optimising data for use by the AI system
    • feature engineering – methods to improve AI model training to better identify and learn patterns in the data
    • data quality – measuring dimensions of a dataset associated with greater performance and reliability
    • data validation – testing the consistency, accuracy, and reliability of the data to ensure it meets the requirements of the AI system (see the sketch after this list)
    • data integration and fusion – combining data from multiple sources to synchronise the flow of data to the AI system
    • data sharing – promoting reuse, reducing resources required for collection and analysis, and helping to build interoperability between systems and datasets
    • model dataset establishment – using real-world production data to build, refine, and contextualise a high-quality AI model.
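    As an illustration of the data validation workstream, the sketch below runs a few example checks over an incoming dataset with pandas. The column names, ranges, and rules are hypothetical; real checks would be derived from the AI system’s data requirements.

    ```python
    # Hypothetical data validation pass: the checks are examples only and
    # would be derived from the AI system's actual data requirements.
    import pandas as pd

    df = pd.DataFrame({
        "age": [34, 29, None, 47],
        "postcode": ["2600", "3000", "4000", "999"],  # last value is malformed
    })

    issues = []
    if df["age"].isnull().any():
        issues.append("age: missing values")
    if not df["age"].dropna().between(0, 120).all():
        issues.append("age: out-of-range values")
    if not df["postcode"].str.fullmatch(r"\d{4}").all():
        issues.append("postcode: malformed values")

    print(issues or "all checks passed")
    ```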
       

  • Notes: 

  • Train: statements 20 - 25

  • The train stage covers the creation and selection of models and algorithms. The key activities in this stage include modelling, pre- and post-processing, model refinements, and fine-tuning. It also considers the use of pre-trained models and associated fine-tuning for the operational context.
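    As a hedged illustration of this pattern, the PyTorch sketch below freezes a stand-in “pre-trained” backbone and trains only a new task-specific head. The model shape, data, and hyperparameters are all invented for illustration and are not a prescribed approach.

    ```python
    # Illustrative fine-tuning pattern in PyTorch: freeze the pre-trained
    # backbone, attach a new task head, and train only the new parameters.
    # The backbone here is a stand-in; in practice it would be loaded from
    # a pre-trained checkpoint appropriate to the operational context.
    import torch
    from torch import nn

    backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # stand-in "pre-trained" layers
    for param in backbone.parameters():
        param.requires_grad = False  # keep pre-trained weights fixed

    head = nn.Linear(32, 2)  # new task-specific classification head
    model = nn.Sequential(backbone, head)

    optimiser = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(64, 16)          # invented training inputs
    y = torch.randint(0, 2, (64,))   # invented training labels

    for epoch in range(5):           # short illustrative training loop
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()
    ```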

  • Exemptions

    The DTA acknowledges that some agencies may be unable to meet one or more of the criteria set out by the Digital Service Standard due to a range of circumstances. These circumstances may include but are not limited to:

    • legacy technology barriers that the agency cannot reasonably overcome
    • substantial financial burden caused by changing a service to meet criteria.

    Exemptions may be granted for one or more of the criteria set out by the Digital Service Standard. This will be assessed on a case-by-case basis. Exemptions must be applied for through the DTA.

    Further information can be found in the Digital Experience Policy Exemption Guide.

    Note: Even if a service or website is not covered by the Digital Service Standard, or an exemption is received, obligations may still apply under relevant Australian legislation, for example accessibility requirements under the Disability Discrimination Act 1992.

