Statement 37: Establish monitoring framework
Agencies should
Criterion 124: Define reporting requirements.
This includes:
- establishing a plan for providing reports to different stakeholders
- defining, for each stakeholder group (persona), what needs to be reported, why, when, and how (see the illustrative sketch after this list).
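A reporting plan of this kind can be captured as structured data. The following minimal Python sketch is illustrative only; the personas, cadences, and channels are assumed examples, not prescribed values.
```python
# Minimal sketch of a stakeholder reporting plan (illustrative values only).
REPORTING_PLAN = {
    "executive_sponsor": {
        "what": "outcomes, risks, cost against budget",
        "why": "accountability and investment decisions",
        "when": "monthly",
        "how": "dashboard summary and written brief",
    },
    "system_operators": {
        "what": "performance, drift, and incident metrics",
        "why": "day-to-day operation and intervention",
        "when": "daily",
        "how": "operational dashboard",
    },
    "assurance_team": {
        "what": "compliance status and audit trail extracts",
        "why": "assurance and audit readiness",
        "when": "quarterly",
        "how": "formal report",
    },
}

for persona, plan in REPORTING_PLAN.items():
    print(f"{persona}: {plan['what']} ({plan['when']}, via {plan['how']})")
```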
Criterion 125: Define alerting requirements.
This includes:
- defining what information needs alerting
- defining what information is critical to be alerted in real-time
- defining severity levels, such as major, minor, warning
- defining thresholds, out-of-pattern behaviour, and other triggers for each alert level
- defining who needs to be alerted and the method of alert, such as SMS or email (see the illustrative sketch after this list).
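Severity levels, thresholds, and alert routing lend themselves to simple configuration. The following minimal Python sketch is illustrative; the metric names, threshold values, and notification channels are assumptions for the example.
```python
# Minimal sketch of alerting configuration (illustrative thresholds only).
ALERT_RULES = [
    {"metric": "error_rate", "threshold": 0.05, "severity": "major",
     "realtime": True, "notify": ["oncall-sms"]},
    {"metric": "error_rate", "threshold": 0.01, "severity": "minor",
     "realtime": False, "notify": ["team-email"]},
    {"metric": "p95_latency_ms", "threshold": 2000, "severity": "warning",
     "realtime": False, "notify": ["team-email"]},
]

def evaluate(metric: str, value: float) -> list[dict]:
    """Return the alert rules triggered by an observed metric value."""
    return [r for r in ALERT_RULES
            if r["metric"] == metric and value >= r["threshold"]]

# Example: a 6% error rate trips both the major and minor rules.
for rule in evaluate("error_rate", 0.06):
    print(rule["severity"], "->", rule["notify"])
```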
Criterion 126: Implement monitoring tools.
This includes:
- monitoring the information needed to satisfy alerting and reporting requirements
- automating monitoring, alerting, and reporting
- implementing management information and dashboards
- implementing role-based access to protect sensitive information and meet security requirements (illustrated in the sketch after this list)
- implementing real-time alerting requirements.
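Role-based access to monitoring data can be enforced at the dashboard or API layer. The following minimal Python sketch illustrates the idea; the roles and metric categories are hypothetical.
```python
# Minimal sketch of role-based access to monitoring views (hypothetical roles).
ROLE_PERMISSIONS = {
    "operator": {"system_health", "performance"},
    "security_analyst": {"system_health", "performance", "safety", "security"},
    "executive": {"performance", "cost"},
}

def can_view(role: str, category: str) -> bool:
    """Check whether a role may view a given category of monitoring data."""
    return category in ROLE_PERMISSIONS.get(role, set())

assert can_view("security_analyst", "safety")
assert not can_view("operator", "security")  # sensitive views are restricted
```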
Criterion 127: Implement feedback loop to ensure that insights from monitoring are fed back into the development and improvement of the AI system.
This includes:
- a decision matrix outlining guidance on which components in the AI system would need an update or refresh, such as pre- or post-processing components, the AI model, or a RAG knowledge base in a GenAI system (see the illustrative sketch after this list)
- a framework to provide and track recommended actions from the insights
- a guideline for identifying actions to address insights, with consideration of costs, delays, AI trust, and effectiveness.
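The decision matrix can itself be recorded as structured data so that recommended actions are consistent and traceable. The following minimal Python sketch uses hypothetical insight types and components.
```python
# Minimal sketch of a feedback decision matrix (hypothetical mappings).
DECISION_MATRIX = {
    "accuracy_drift": {"component": "AI model", "action": "retrain or refresh"},
    "stale_answers": {"component": "RAG knowledge base", "action": "re-index sources"},
    "formatting_errors": {"component": "post-processing", "action": "update rules"},
}

def recommend(insight: str) -> dict:
    """Map a monitoring insight to the component and action to review."""
    return DECISION_MATRIX.get(
        insight, {"component": "unknown", "action": "triage manually"})

print(recommend("stale_answers"))
```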
Statement 38: Undertake ongoing testing and monitoring
Agencies must
Criterion 128: Test periodically after deployment and have a clear framework to manage any issues.
This provides assurance that the system still operates as intended. See the Test section for applicable tests.
Criterion 129: Monitor the system as agreed and specified in its operating procedures.
Ensure the operators understand when, why, and how to intervene.
Criterion 130: Monitor performance and AI drift as per pre-defined metrics.
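One commonly used drift metric is the population stability index (PSI), which compares the distribution of a model input or score between a baseline window and a recent window. The following Python sketch shows a standard PSI calculation; the often-cited 0.2 alert threshold is a rule of thumb, not a prescribed value.
```python
import math

def psi(baseline: list[float], recent: list[float], bins: int = 10) -> float:
    """Population stability index between two samples of a score or feature."""
    lo, hi = min(baseline), max(baseline)
    span = (hi - lo) or 1.0

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = int((x - lo) / span * bins)
            counts[max(0, min(idx, bins - 1))] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p = proportions(baseline)
    q = proportions(recent)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Rule of thumb: PSI above ~0.2 suggests significant drift worth investigating.
```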
Criterion 131: Monitor health of the system and infrastructure.
This includes:
- monitoring logs for errors
- monitoring the status of services and processes
- monitoring resources such as compute, memory, storage, and network (see the illustrative sketch after this list).
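Resource health checks are usually handled by platform tooling, but the underlying checks can be illustrated with the widely used psutil library. The following is a minimal sketch; the alert thresholds are assumptions.
```python
import psutil  # third-party library: pip install psutil

# Minimal sketch of resource health checks (thresholds are illustrative).
checks = {
    "cpu_percent": (psutil.cpu_percent(interval=1), 90.0),
    "memory_percent": (psutil.virtual_memory().percent, 85.0),
    "disk_percent": (psutil.disk_usage("/").percent, 80.0),
}

for name, (value, limit) in checks.items():
    status = "ALERT" if value >= limit else "ok"
    print(f"{name}: {value:.1f}% [{status}]")
```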
Criterion 132: Monitor safety.
This includes:
- monitoring inputs and outputs for abuse, misuse, sensitive information disclosure, and other forms of harm (see the illustrative sketch after this list).
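Input and output screening typically combines platform guardrails with custom filters. The following minimal Python sketch shows a simple pattern-based check for sensitive information disclosure; the two patterns (an email address and a tax-file-number-like digit pattern) are illustrative, not a complete rule set.
```python
import re

# Minimal sketch of pattern-based screening for sensitive information.
# Patterns are illustrative only; production systems need a vetted rule set.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "tax_file_number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
}

def screen(text: str) -> list[str]:
    """Return the names of sensitive patterns detected in model input/output."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

print(screen("Contact me at jane@example.gov.au, TFN 123 456 789"))
# -> ['email_address', 'tax_file_number']
```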
Criterion 133: Monitor reliability metrics and mechanisms.
This includes:
- error rates
- fault detection
- recovery
- redundancy
- failover mechanisms (an illustrative error-rate sketch follows this list).
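Error rates are often tracked over a rolling window so that fault detection triggers promptly. The following minimal Python sketch assumes an illustrative window size and threshold.
```python
from collections import deque

class RollingErrorRate:
    """Track the error rate over the last `window` requests (illustrative)."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)  # True = success, False = error
        self.threshold = threshold

    def record(self, success: bool) -> None:
        self.outcomes.append(success)

    @property
    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return 1 - sum(self.outcomes) / len(self.outcomes)

    def breached(self) -> bool:
        """True when the rolling error rate exceeds the alert threshold."""
        return self.error_rate > self.threshold

monitor = RollingErrorRate()
for ok in [True] * 90 + [False] * 10:
    monitor.record(ok)
print(monitor.error_rate, monitor.breached())  # 0.1 True
```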
Criterion 134: Monitor human-machine collaboration.
This includes:
- reviewing the human experience
- assessing the effectiveness of the human oversight and control measures
- analysing the usage metrics for friction points and finding opportunities for improving the overall outcome from human-machine collaboration
- considering different monitoring methods. While surveys could be cost-effective, face-to-face interviews and observing users live as they interact with the AI system could provide better insights.
Criterion 135: Monitor for unintended consequences.
This typically includes:
- implementing various channels for people to provide feedback, raise issues, or contest outcomes
- considering whether anonymous channels are needed
- tracing how the outputs of the AI system are used
- analysing quantitative and qualitative data for recurring harms
- looking for missing data, such as checking whether certain demographics are not using the system (see the illustrative sketch after this list).
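One concrete check for missing data is to compare each demographic group's share of system usage against its share of the overall user base. The following minimal Python sketch uses hypothetical figures.
```python
# Minimal sketch: flag demographics that appear under-represented among users.
# All figures below are hypothetical examples.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
usage_share = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}

# Flag groups using the system at less than half their expected rate.
for group, expected in population_share.items():
    observed = usage_share.get(group, 0.0)
    if observed < 0.5 * expected:
        print(f"Check access barriers for group '{group}': "
              f"{observed:.0%} usage vs {expected:.0%} of user base")
```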
Criterion 136: Monitor transparency and explainability.
Periodically check that transparency and explainability requirements are met post deployment.
Criterion 137: Monitor costs.
The cost model for AI systems may differ from that of traditional software and systems, and operating costs can be much higher.
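For generative AI services billed per token, spend can be estimated and tracked continuously. The following Python sketch shows the arithmetic; the prices are placeholders, not actual supplier rates.
```python
# Minimal sketch of per-token cost tracking (prices are placeholders).
PRICE_PER_1K_TOKENS = {"input": 0.01, "output": 0.03}  # hypothetical rates

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of a single model call under the assumed price list."""
    return (input_tokens / 1000 * PRICE_PER_1K_TOKENS["input"]
            + output_tokens / 1000 * PRICE_PER_1K_TOKENS["output"])

# Example: 10,000 daily requests averaging 1,500 input / 500 output tokens.
daily = 10_000 * request_cost(1_500, 500)
print(f"Estimated daily spend: ${daily:,.2f}")  # $300.00 under these rates
```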
Criterion 138: Monitor security.
This may include logging AI services in use to satisfy security requirements and ensuring appropriate data loss prevention (DLP).
Identify the scope of deployment data for the AI system.
This includes:
- data submitted by the user (prompts)
- agency data augmented into the prompts
- content generated by the service via completions, images, and embedding operations
- training and validation data from the department that will be used for fine-tuning a model.
DLP includes:
- ensuring that the supplier does NOT use agency data for improving the supplier's AI systems or other products
- ensuring the system only accesses data that the end-user is authorised to access
- ensuring that human review is performed by authorised users only
- monitoring for sensitive data disclosure
- monitoring data access and usage
- automating data classification
- monitoring for anomalies and suspicious activities (see the illustrative sketch after this list)
- ensuring data encryption is enabled for data-at-rest and data-in-transit
- ensuring that any data provided to the model, or generated by the model, can be deleted completely by the authorised user.
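Monitoring for anomalies in data access can start with a simple statistical baseline before more sophisticated tooling is introduced. The following minimal Python sketch flags a user whose daily record access is far above their historical mean; the z-score threshold is an assumption for the example.
```python
import statistics

# Minimal sketch: flag anomalous data access volumes (illustrative threshold).
def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's access count if it is far above the user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    return (today - mean) / stdev > z_threshold

# A user who normally reads ~20 records suddenly reads 500.
print(is_anomalous([18, 22, 19, 21, 20, 23, 17], 500))  # True
```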
Criterion 139: Monitor compliance of the AI system.
Statement 39: Establish incident resolution processes
Agencies must
Criterion 140: Define incident handling processes.
This involves establishing a structured process for incident management that ensures identified incidents are allocated a severity level and addressed promptly and effectively. This includes security incident response, reporting, and monitoring.
This must comply with the Australian Government Protective Security Policy Framework (PSPF) and the Information Security Manual (ISM).
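Incident records and severity allocation can be captured in a simple structure from the outset. The following minimal Python sketch uses hypothetical severity levels and fields.
```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of a structured incident record (fields are illustrative).
SEVERITY_LEVELS = ("critical", "major", "minor", "warning")

@dataclass
class Incident:
    summary: str
    severity: str                   # one of SEVERITY_LEVELS
    security_related: bool = False  # drives PSPF/ISM reporting obligations
    status: str = "open"            # open -> investigating -> resolved
    raised_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        if self.severity not in SEVERITY_LEVELS:
            raise ValueError(f"unknown severity: {self.severity}")

incident = Incident("Model returning off-policy answers", severity="major")
print(incident.status, incident.severity)
```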
Criterion 141: Implement corrective and preventive actions for incidents.
This includes:
- defining clear protocols for root cause analysis, corrective actions, and preventive actions
- maintaining detailed logs and documentation to facilitate troubleshooting, provide input into longer term problem management, and assist continuous improvement of AI systems.
Statement 40: Create a decommissioning plan
Agencies must
Criterion 142: Define the scope of decommissioning activities.
Decommissioning plans clearly identify the system components being shut down, disabled, reused, or repurposed, and the reason for decommissioning.
Ensure compliance with the National Archives of Australia guidance, Information management for records created using Artificial Intelligence (AI) technologies (naa.gov.au).
Criterion 143: Conduct an impact analysis of decommissioning the target AI system.
Assessing the potential impacts on an agency’s business operations, stakeholders, and compliance obligations allows for the identification of dependencies, risks, and any alternative solutions required to maintain service continuity.
Criterion 144: Proactively communicate system retirement.
This involves:
- informing all affected parties (employees, partners, and users) about the decommissioning schedule, reasons for decommissioning, and any expected impacts
- addressing any issues or concerns
- providing support and information about alternative systems, and any forecast implementation schedules
- considering a retrospective review of the AI system's performance and lifecycle to provide valuable insights for future projects. The review should involve analysing the system's successes, challenges, and areas for improvement.
Statement 41: Shut down the AI system
Agencies must
Criterion 145: Retain AI system compliance records.
Any records related to an AI system, including those generated during retirement, must be preserved for agencies to demonstrate compliance and effectively respond to future audits and inquiries.
Criterion 146: Disable computing resources or components specifically dedicated to the AI system.
Criterion 147: Securely decommission or repurpose all computing resources dedicated to the AI system, including individual and shared components.
This involves:
- systematically shutting down and wiping servers, storage devices, and network components
- identifying which system or network components will be repurposed, and disconnecting or reconfiguring accordingly
- terminating instances and services associated with cloud resources, ensuring no data remains.
Statement 42: Finalise documentation and reporting
Agencies must
Criterion 148: Finalise decommissioning information and update organisational documentation.
This involves:
- recording all decommissioning information in the final system documentation, compiling records of decommissioning activities, decisions, and lessons learnt
- delivering a final report to relevant stakeholders
- providing all documentation related to the decommissioning of the AI system or components, including detailed accounts of:
- the decommissioning process
- compliance adherence
- implications for ongoing operations.
If you are outside of Australia
We have developed a version of this training that is not limited by location.
However, as this version of the training module was originally developed for and by the Australian Government to support the implementation of the Policy for the responsible use of AI in government, it still refers to some Australian-specific resources that may not be applicable in your jurisdiction.