Criterion 124: Define reporting requirements.
This includes:
Criterion 125: Define alerting requirements.
This includes:
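As an illustrative sketch only, the following Python fragment shows one minimal form an alerting rule can take: a named metric, a threshold, and a severity. The metric names, thresholds, and severities shown are assumptions for illustration, not prescribed values.

```python
# Illustrative only: a minimal threshold-based alerting rule.
# Metric names, thresholds, and severities are assumptions.
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric: str          # e.g. "p95_latency_ms", "error_rate"
    threshold: float     # alert when the observed value exceeds this
    severity: str        # e.g. "warning", "critical"

def evaluate(rules: list[AlertRule], observed: dict[str, float]) -> list[str]:
    """Return alert messages for every rule whose threshold is breached."""
    alerts = []
    for rule in rules:
        value = observed.get(rule.metric)
        if value is not None and value > rule.threshold:
            alerts.append(f"[{rule.severity}] {rule.metric}={value} "
                          f"exceeds threshold {rule.threshold}")
    return alerts

rules = [AlertRule("error_rate", 0.05, "critical"),
         AlertRule("p95_latency_ms", 800, "warning")]
print(evaluate(rules, {"error_rate": 0.08, "p95_latency_ms": 420}))
```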
Criterion 126: Implement monitoring tools.
This includes:
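As one hedged example, the sketch below instruments a prediction function with the open-source prometheus_client Python library, exposing a request counter and a latency histogram for scraping. The metric names and port are illustrative assumptions; any comparable monitoring tool may be used.

```python
# Illustrative sketch: exposing basic AI-system metrics with the
# open-source prometheus_client library (pip install prometheus-client).
# Metric names and the port are assumptions for illustration.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("ai_predictions_total", "Predictions served")
LATENCY = Histogram("ai_prediction_latency_seconds", "Prediction latency")

def predict(features):
    with LATENCY.time():                        # records inference duration
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real inference
        PREDICTIONS.inc()
        return 0

if __name__ == "__main__":
    start_http_server(8000)     # metrics scrapeable at :8000/metrics
    while True:
        predict({})
```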
Criterion 127: Implement feedback loop to ensure that insights from monitoring are fed back into the development and improvement of the AI system.
This includes:
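One possible mechanism, sketched below in Python, routes low-confidence predictions into a review queue whose human-verified outcomes become candidate training data for the next model version. The confidence threshold and in-memory queue are assumptions for illustration.

```python
# Illustrative sketch of one feedback-loop mechanism: predictions the
# model is unsure about are queued for human review, and the reviewed
# examples become candidate training data for the next model version.
# The threshold and in-memory queue are assumptions.
import json

REVIEW_THRESHOLD = 0.7
review_queue: list[dict] = []   # stand-in for a persistent queue or database

def handle_prediction(features: dict, label: str, confidence: float) -> str:
    if confidence < REVIEW_THRESHOLD:
        review_queue.append({"features": features,
                             "model_label": label,
                             "confidence": confidence})
    return label

handle_prediction({"x": 1}, "approve", 0.62)   # queued for review
handle_prediction({"x": 2}, "reject", 0.95)    # confident, not queued
print(json.dumps(review_queue, indent=2))
```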
Criterion 128: Test periodically after deployment and have a clear framework to manage any issues.
This provides assurance that the system still operates as intended. See the Test section for applicable tests.
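A minimal sketch of such a periodic check, assuming a fixed labelled evaluation set and an accuracy baseline recorded at acceptance, might look like the following; the baseline, tolerance, and model interface are illustrative.

```python
# Illustrative sketch: a periodic post-deployment check that re-scores a
# fixed, labelled evaluation set and fails if accuracy drops more than an
# agreed tolerance below the accuracy recorded at acceptance time.
# The baseline, tolerance, and model interface are assumptions.
BASELINE_ACCURACY = 0.92
TOLERANCE = 0.03

def post_deployment_check(model, eval_set: list[tuple[dict, str]]) -> bool:
    correct = sum(1 for features, label in eval_set
                  if model(features) == label)
    accuracy = correct / len(eval_set)
    ok = accuracy >= BASELINE_ACCURACY - TOLERANCE
    print(f"accuracy={accuracy:.3f} baseline={BASELINE_ACCURACY} ok={ok}")
    return ok

# Example with a trivial stand-in model:
model = lambda features: "approve" if features["score"] > 0.5 else "reject"
eval_set = [({"score": 0.9}, "approve"), ({"score": 0.2}, "reject")]
assert post_deployment_check(model, eval_set)
```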
Criterion 129: Monitor the system as agreed and specified in its operating procedures.
Ensure the operators understand when, why, and how to intervene.
Criterion 130: Monitor performance and AI drift as per pre-defined metrics.
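One widely used drift metric is the Population Stability Index (PSI), which compares the live distribution of a model input or score against a reference distribution captured at deployment. The sketch below is illustrative; the binning and the common rule of thumb that a PSI above roughly 0.25 signals significant drift are assumptions to be confirmed against the agency's own pre-defined metrics.

```python
# Illustrative sketch of one common drift metric, the Population
# Stability Index (PSI). Bin count and the ~0.25 alert threshold are
# assumptions; agencies should use their own pre-defined metrics.
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    lo = min(expected + actual)
    hi = max(expected + actual)

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # floor at a tiny value so empty bins don't divide by zero
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(100)]             # scores at deployment
live = [min(1.0, i / 100 + 0.2) for i in range(100)]  # shifted live scores
print(f"PSI = {psi(reference, live):.3f}")            # > 0.25 suggests drift
```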
Criterion 131: Monitor health of the system and infrastructure.
This includes:
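As an illustrative sketch, the following host-level probe uses the open-source psutil library to check CPU, memory, and disk headroom against thresholds. The thresholds and choice of checks are assumptions; production health monitoring would also probe application endpoints, queues, and dependencies.

```python
# Illustrative sketch: a basic host-level health probe using the
# open-source psutil library (pip install psutil). Thresholds and the
# choice of checks are assumptions for illustration.
import psutil

THRESHOLDS = {"cpu_percent": 90.0, "memory_percent": 90.0, "disk_percent": 95.0}

def health_check() -> dict:
    observed = {
        "cpu_percent": psutil.cpu_percent(interval=0.5),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }
    failures = {k: v for k, v in observed.items() if v > THRESHOLDS[k]}
    return {"healthy": not failures, "observed": observed, "failing": failures}

print(health_check())
```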
Criterion 132: Monitor safety.
This includes:
Criterion 133: Monitor reliability metrics and mechanisms.
This includes:
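One minimal sketch, assuming availability over a rolling window of request outcomes is among the agreed reliability metrics; the window size and availability target are assumptions.

```python
# Illustrative sketch: simple reliability metrics (availability and
# error rate) over a rolling window of request outcomes, checked
# against an availability target. Window and target are assumptions.
from collections import deque

class ReliabilityTracker:
    def __init__(self, window: int = 1000, availability_target: float = 0.995):
        self.outcomes = deque(maxlen=window)   # True = success, False = failure
        self.target = availability_target

    def record(self, success: bool) -> None:
        self.outcomes.append(success)

    def availability(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def within_target(self) -> bool:
        return self.availability() >= self.target

tracker = ReliabilityTracker(window=100)
for i in range(100):
    tracker.record(i % 25 != 0)    # simulate an occasional failure
print(f"availability={tracker.availability():.3f} "
      f"within target: {tracker.within_target()}")
```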
Criterion 134: Monitor human-machine collaboration.
This includes:
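One measurable signal, sketched below, is the rate at which human operators override the system's recommendations; a rising override rate can indicate eroding trust or degraded model quality. The event format is an assumption.

```python
# Illustrative sketch: tracking how often human operators override the
# AI system's recommendation. The event record format is an assumption.
def override_rate(events: list[dict]) -> float:
    """events: {'ai_recommendation': ..., 'human_decision': ...}"""
    if not events:
        return 0.0
    overrides = sum(1 for e in events
                    if e["human_decision"] != e["ai_recommendation"])
    return overrides / len(events)

events = [
    {"ai_recommendation": "approve", "human_decision": "approve"},
    {"ai_recommendation": "approve", "human_decision": "reject"},
    {"ai_recommendation": "reject",  "human_decision": "reject"},
]
print(f"override rate = {override_rate(events):.2f}")  # 0.33
```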
Criterion 135: Monitor for unintended consequences.
This typically includes:
Criterion 136: Monitor transparency and explainability.
Periodically check that transparency and explainability requirements continue to be met after deployment.
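As an illustrative sketch, such a check might verify that logged decision records still carry the agreed transparency artefacts, for example a human-readable reason and the model version; the required fields shown are assumptions drawn from no particular specification.

```python
# Illustrative sketch: a periodic audit that logged AI decisions still
# carry agreed transparency artefacts. The required fields are
# assumptions; real requirements come from the agency's specification.
REQUIRED_FIELDS = {"decision", "reason", "model_version", "timestamp"}

def audit_records(records: list[dict]) -> list[int]:
    """Return indices of decision records missing any required field."""
    return [i for i, r in enumerate(records)
            if not REQUIRED_FIELDS.issubset(r)]

records = [
    {"decision": "approve", "reason": "income above threshold",
     "model_version": "1.4.2", "timestamp": "2024-05-01T09:30:00"},
    {"decision": "reject", "timestamp": "2024-05-01T09:31:00"},  # incomplete
]
print("non-compliant records:", audit_records(records))  # [1]
```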
Criterion 137: Monitor costs.
The cost model for AI systems can differ markedly from that of traditional software and systems, and usage-based charges can make AI systems much more costly to operate.
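The sketch below illustrates usage-based cost tracking for a hosted generative AI service billed per token, with a simple budget warning. All prices, budgets, and usage figures are hypothetical; real rates come from the provider's price list.

```python
# Illustrative sketch: usage-based cost tracking for a hosted generative
# AI service billed per token. All figures are hypothetical.
PRICE_PER_1K_INPUT = 0.01    # hypothetical $ per 1k input tokens
PRICE_PER_1K_OUTPUT = 0.03   # hypothetical $ per 1k output tokens
MONTHLY_BUDGET = 5000.00     # hypothetical agreed monthly budget ($)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

spend = 0.0
for in_tok, out_tok in [(1200, 300), (8000, 2500), (400, 150)]:
    spend += request_cost(in_tok, out_tok)

print(f"month-to-date spend: ${spend:.2f}")
if spend > 0.8 * MONTHLY_BUDGET:
    print("WARNING: spend has passed 80% of the monthly budget")
```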
Criterion 138: Monitor security.
This may include logging AI services in use to satisfy security requirements and ensuring appropriate data loss prevention (DLP).
Identify the scope of deployment data for the AI system.
These include:
DLP includes:
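As a narrow, illustrative example of one DLP control, the sketch below scans text bound for an external AI service against a small set of assumed sensitive-data patterns and blocks matching requests; real DLP rule sets are far broader than the two patterns shown.

```python
# Illustrative sketch of one narrow DLP control: scanning outbound text
# for patterns that may indicate personal or sensitive information.
# The patterns (email addresses, 16-digit card-like numbers) are a
# small assumed sample, not a complete rule set.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){16}\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

prompt = "Summarise the complaint from jane.citizen@example.com"
hits = scan_outbound(prompt)
if hits:
    print(f"blocked: prompt matched DLP patterns {hits}")
```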
Criterion 139: Monitor compliance of the AI system.
Criterion 140: Define incident handling processes.
This involves establishing a structured process for incident management that ensures identified incidents are allocated a severity level and addressed promptly and effectively. This includes security incident reporting and monitoring.
This must comply with the Australian Government Protective Security Policy Framework (PSPF) and the Information Security Manual (ISM).
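A minimal sketch of severity allocation follows, with an assumed four-level scheme that agencies would replace with their own PSPF- and ISM-aligned definitions.

```python
# Illustrative sketch: allocating a severity level to a reported
# incident so it can be triaged consistently. The four-level scheme and
# the rules are assumptions, to be aligned with agency PSPF/ISM
# obligations.
def classify_severity(incident: dict) -> str:
    if incident.get("data_breach") or incident.get("safety_impact"):
        return "SEV1"   # immediate escalation and mandatory reporting
    if incident.get("service_down"):
        return "SEV2"   # major degradation, urgent response
    if incident.get("degraded_accuracy"):
        return "SEV3"   # managed through normal change processes
    return "SEV4"       # minor; track and review

incident = {"service_down": False, "degraded_accuracy": True}
print(classify_severity(incident))   # SEV3
```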
Criterion 141: Implement corrective and preventive actions for incidents.
This includes:
Criterion 142: Define the scope of decommissioning activities.
Decommissioning plans clearly identify the system components being shut down, disabled, reused, or repurposed and the reason for decommissioning.
Ensure compliance with the National Archives of Australia guidance Information management for records created using Artificial Intelligence (AI) technologies (naa.gov.au).
Criterion 143: Conduct an impact analysis of decommissioning the target AI system.
Assessing the potential impacts on an agency’s business operations, stakeholders and compliance obligations allows for the identification of dependencies, risks and any alternative solutions required to maintain service continuity.
Criterion 144: Proactively communicate system retirement.
This involves:
Criterion 145: Retain AI system compliance records.
Any records related to an AI system, including those generated during retirement, must be preserved for agencies to demonstrate compliance and effectively respond to future audits and inquiries.
Criterion 146: Disable computing resources or components specifically dedicated to the AI system.
Criterion 147: Securely decommission or repurpose all computing resources dedicated to the AI system, including individual and shared components.
This involves:
Criterion 148: Finalise decommissioning information and update organisational documentation.
This involves: