• Statement 2: Define the reference architecture

  • Services covered by the Digital Service Standard

    The Service Standard is mandatory and applies to informational and transactional digital services that are: 

    • subject to the requirements of the Investment Oversight Framework (IOF)
    • new or replacement public-facing services
    • new staff-facing services
    • all existing public-facing services.
       
  • A reference architecture provides a structured framework that guides the design, development, and management of an AI system.

    Agencies must:

    • Criterion 4: Evaluate existing reference architectures.

      Make use of the Australian Government Architecture to:

      • consider reusing pre-trained models when applicable
      • consider whether to build in-house or use off-the-shelf software or services
      • ensure strategic alignment with government's digital direction
      • ensure consistency and interoperability across agencies.

    Agencies should:

    • Criterion 5: Monitor emerging reference architectures to evaluate and update the AI system.

      New architectural paradigms are emerging that address complex AI applications, including:

      • Large language model (LLM) architectures: These architectures focus on deploying and managing large-scale language models. They encompass the systems, tools, and design patterns that integrate LLMs into applications, ensuring scalability and efficiency.
      • AI infrastructure architectures: Sometimes described as 'AI factories', these architectures are conceptualised to streamline the production of AI models, providing comprehensive guidelines for building high-performance, scalable, and secure data centres dedicated to AI development. They support the end-to-end lifecycle of AI system creation, from development to deployment.
      • Generative AI (GenAI) reference architectures: These architectures outline the interfaces and components of GenAI applications, enabling users to interact with AI systems effectively. They emphasise modularity and flexibility, allowing various AI functionalities to be integrated to meet diverse user needs; a minimal sketch of this modular pattern follows.
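
      The modularity these architectures emphasise can be illustrated with a small amount of code. The sketch below shows a GenAI application layer that depends only on a narrow model interface, so a stand-in client can later be swapped for a vendor-hosted LLM without changing the application. It is a minimal Python sketch only; every class and function name is an illustrative assumption, not drawn from the standard or any vendor SDK.

      ```python
      from dataclasses import dataclass
      from typing import Protocol


      class ModelClient(Protocol):
          """Interface boundary: any LLM provider can sit behind this."""
          def complete(self, prompt: str) -> str: ...


      @dataclass
      class EchoClient:
          """Stand-in client so the sketch runs without an external service."""
          prefix: str = "model-response: "

          def complete(self, prompt: str) -> str:
              return self.prefix + prompt


      def build_prompt(user_query: str, system_rules: str) -> str:
          """Prompt assembly kept separate so it can evolve independently."""
          return f"{system_rules}\n\nUser: {user_query}"


      def answer(client: ModelClient, user_query: str) -> str:
          """The application layer depends on the interface, not a vendor."""
          prompt = build_prompt(user_query, "Answer using approved sources only.")
          return client.complete(prompt)


      print(answer(EchoClient(), "What services does the agency provide?"))
      ```

      Because the interface is the only coupling point, an agency could replace the stand-in client with a hosted or on-premises model client as the architecture evolves.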
  • Statement 3: Identify and build people capabilities

  • Use case assessment

    The standard was assessed against a selection of use cases across government agencies. Outcomes were collated to identify how the standard can be applied at each lifecycle stage.

    The assessment considered:

    • the stage of the applications, ranging from proof of concept to those in operation
    • the nature of the applications, whether used by internal staff or public facing
    • the type of data involved, whether private, public, or a combination of both
    • the risk level of the applications, ranging from low to high.

    The applicability of the standard varied based on who built each part of the AI system:

    1. Fully built and managed in-house: Involves building AI systems from scratch.
    2. Partially built and fully managed in-house: This includes using pre-trained or off-the-shelf models, with or without grounding, retrieval-augmented generation (RAG) or prompt engineering. Examples are large language models (LLMs) and reused pre-trained machine learning or computer vision models. Note that fine-tuning a model would transfer the responsibility of applying the standard from the vendor to the agency (a minimal RAG sketch follows this list).
    3. Largely built and managed externally: Sourcing or procuring an AI system or SaaS product that is managed by a third party or external provider, such as Copilot.
    4. Incidental usage of AI: Using off-the-shelf software with AI as an incidental feature.
      Examples include:
      • AI features built into desktop software, such as grammar checks
      • internet search with AI functionality.
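
    To make the grounding and RAG pattern in item 2 concrete, the sketch below shows retrieval-augmented generation in miniature: approved passages are retrieved, then folded into the prompt sent to a pre-trained model. The keyword-overlap retrieval and all names are illustrative assumptions; a production system would typically use a vector store and a vendor SDK instead.

    ```python
    def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
        """Toy keyword-overlap retrieval standing in for a vector-store lookup."""
        terms = set(query.lower().split())
        return sorted(corpus, key=lambda doc: -len(terms & set(doc.lower().split())))[:top_k]


    def grounded_prompt(query: str, passages: list[str]) -> str:
        """Prompt engineering step: constrain the model to the retrieved context."""
        context = "\n".join(f"- {p}" for p in passages)
        return f"Answer only from the context below.\nContext:\n{context}\n\nQuestion: {query}"


    corpus = [
        "Passport renewals can be lodged online.",
        "Office hours are 9am to 5pm on weekdays.",
        "Fees are reviewed each financial year.",
    ]
    passages = retrieve("renew passport online", corpus)
    print(grounded_prompt("How do I renew a passport?", passages))  # then sent to the model
    ```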

    The applicability of the statements in the standard was tested against each AI use case to determine whether the standard could be applied. In some cases, such as when a pre-trained model is used, applicability may be conditional: it depends on the use case, vendor responsibility, and how AI is integrated into the environment.

    Applicability of the standard has been categorised as:

    • Applicable: The statements in the standard fully apply to the use case.
    • Conditional: The statements in the standard are applicable, but their implementation may require agreement with third-party providers or rigorous testing and monitoring. For example, when using GenAI without fine-tuning or grounding, parts of the standard will be implemented by the provider.
    • N/A (not applicable): The use case falls outside the scope of the standard, and therefore the statements do not apply.

    The following table shows the applicability of the standard against each lifecycle phase:

    | Phase                 | Fully built and managed in-house | Partially built and fully managed in-house | Largely built and managed externally | Incidental usage of AI |
    | --------------------- | -------------------------------- | ------------------------------------------ | ------------------------------------ | ---------------------- |
    | Whole of AI lifecycle | Applicable                       | Applicable                                 | Applicable                           | N/A                    |
    | Design                | Applicable                       | Applicable                                 | Applicable                           | N/A                    |
    | Data                  | Applicable                       | Conditional                                | Conditional                          | N/A                    |
    | Train                 | Applicable                       | Conditional                                | Conditional                          | N/A                    |
    | Evaluate              | Applicable                       | Applicable                                 | Conditional                          | N/A                    |
    | Integrate             | Applicable                       | Applicable                                 | Conditional                          | N/A                    |
    | Deploy                | Applicable                       | Applicable                                 | Conditional                          | N/A                    |
    | Monitor               | Applicable                       | Applicable                                 | Applicable                           | N/A                    |
    | Decommission          | Applicable                       | Applicable                                 | Applicable                           | N/A                    |
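
    Agencies that want to automate self-assessment could encode the matrix above as data. The following Python sketch is one hypothetical encoding; it is not part of the standard, and the column keys are invented labels.

    ```python
    # Columns: fully in-house, partially in-house, external, incidental AI
    COLUMNS = ("in-house", "partial", "external", "incidental")

    MATRIX = {
        "Whole of AI lifecycle": ("Applicable", "Applicable", "Applicable", "N/A"),
        "Design":       ("Applicable", "Applicable", "Applicable", "N/A"),
        "Data":         ("Applicable", "Conditional", "Conditional", "N/A"),
        "Train":        ("Applicable", "Conditional", "Conditional", "N/A"),
        "Evaluate":     ("Applicable", "Applicable", "Conditional", "N/A"),
        "Integrate":    ("Applicable", "Applicable", "Conditional", "N/A"),
        "Deploy":       ("Applicable", "Applicable", "Conditional", "N/A"),
        "Monitor":      ("Applicable", "Applicable", "Applicable", "N/A"),
        "Decommission": ("Applicable", "Applicable", "Applicable", "N/A"),
    }


    def applicability(phase: str, sourcing: str) -> str:
        """Look up how the standard applies for a lifecycle phase and sourcing model."""
        return MATRIX[phase][COLUMNS.index(sourcing)]


    print(applicability("Data", "external"))  # -> Conditional
    ```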
  • Informational services

    Informational services provide information to users, such as reports, fact sheets or videos. They may include: 

    • government agency websites
    • smart answers and virtual assistants
    • e-learning
    • publications
    • online libraries
    • databases and data warehouses. 