Alignment with the Voluntary AI Safety Standard

The Voluntary AI Safety Standard offers practical guidance for Australian organisations on the safe and responsible use and development of artificial intelligence.

Guardrail 1: Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

Guardrail 1 of the Voluntary AI Safety Standard requires organisations to establish a clear framework for accountable AI use. This involves assigning leadership responsibility, documenting governance processes and ensuring alignment with legal and ethical obligations. It calls for the integration of AI oversight into existing organisational structures, supported by appropriate resources and staff capability. Ongoing training and formal procedures are essential to maintain compliance and promote safe and responsible use.

The S.E.C.U.R.E. framework aligns with this guardrail by embedding AI accountability within established institutional governance. It supports a clear decision-making structure by outlining when staff must escalate use cases to those with the authority and expertise to manage risk. Rather than leaving such judgments to individual discretion, the framework provides a consistent and transparent process for identifying and addressing risks, reinforcing the principle that responsibility for AI use is institutional. The framework also builds internal capability by supporting informed decision-making across the organisation. It enables staff to assess their use of GenAI within defined risk thresholds without requiring technical expertise. Its alignment with university policies on data protection, ethical practice and compliance ensures that GenAI use remains consistent with broader institutional obligations. Although not a formal training tool, S.E.C.U.R.E. functions as a practical mechanism for learning. It encourages staff to reflect on their practice, document their decisions and seek advice where necessary. In doing so, it contributes to a culture of responsible AI use and supports the intent of Guardrail 1 by linking strategic oversight with day-to-day operational responsibility.

Guardrail 2: Establish and implement a risk management process to identify and mitigate risks.

Guardrail 2 requires organisations to implement a risk management process that reflects the specific risks associated with AI systems. This involves identifying and reassessing potential harms across the lifecycle of each system and ensuring that assessments align with the organisation’s risk tolerance. The guardrail highlights the need for ongoing monitoring and the application of control measures to manage the types of evolving and amplified risks that are often unique to AI technologies.

The S.E.C.U.R.E. framework aligns with this guardrail by providing a structured process through which staff can evaluate the risks of GenAI use before proceeding. It prompts users to consider key risk areas such as ethical impact, data sensitivity and rights protection, and provides clear pathways for mitigation or escalation. By requiring staff to pause, reflect and document their actions when risk is identified, the framework supports continuous assessment rather than one-off approval. This ensures that decisions about GenAI use remain responsive to context and within the university’s accepted risk boundaries. In applying this process, S.E.C.U.R.E. enables localised risk management that fits within broader institutional governance. It connects directly with existing policies covering data protection, compliance and ethical conduct, helping to ensure that AI use is managed consistently across the organisation. The framework supports this without requiring staff to have technical expertise, instead building confidence and responsibility through accessible guidance and clearly defined expectations. This helps foster a culture of safe and sustainable AI use across the higher education environment.

Guardrail 3: Protect AI systems and implement data governance measures to manage data quality and provenance.

Guardrail 3 requires organisations to implement data governance, privacy and cybersecurity measures that are fit for purpose in the context of AI. These measures must address the unique characteristics of AI systems, including amplified risks related to data quality, provenance and security vulnerabilities. Organisations are expected to assess and document the use, origin and handling of data throughout the AI system lifecycle, and ensure compliance with relevant frameworks such as the Australian Privacy Principles and the Essential Eight Maturity Model.

The S.E.C.U.R.E. framework aligns with this guardrail by explicitly addressing risks associated with data handling, confidentiality, personal information and intellectual property. Staff are prompted to consider whether GenAI use involves sensitive or identifiable data, breaches privacy expectations or improperly discloses confidential or proprietary material. Where such risks are present, use cannot proceed without mitigation or formal approval. This ensures that sensitive data is not entered into GenAI systems without appropriate controls, protecting both institutional systems and individual privacy. The framework supports data governance by linking directly to institutional policies on data classification and privacy. It reinforces awareness of data security obligations and promotes consistent staff behaviour when working with third-party tools. Without requiring technical expertise, S.E.C.U.R.E. builds organisational capacity to manage data-related risks associated with GenAI, helping to ensure that AI use aligns with legal, ethical and contractual obligations across the institution.

Guardrail 4: Test AI models and systems to evaluate model performance and monitor the system once deployed.

Guardrail 4 of the Voluntary AI Safety Standard requires organisations to conduct structured testing of AI systems prior to deployment and to monitor their behaviour and performance over time. It calls for clearly defined acceptance criteria, rigorous pre-deployment evaluation, and ongoing oversight to detect unintended consequences and ensure systems continue to operate safely and appropriately. This guardrail is grounded in the assumption that organisations deploying AI have both visibility into system architecture and the capability to evaluate performance against verifiable benchmarks.

The S.E.C.U.R.E. framework does not align directly with this guardrail, as it does not involve technical testing or lifecycle monitoring of the GenAI systems themselves. It does not assess system-level performance or require formal validation of underlying models. Rather than attempting to fulfil the role of a system-level audit framework, S.E.C.U.R.E. operates at the point of use, supporting staff in making informed decisions about how and when to use GenAI tools in their work. Importantly, S.E.C.U.R.E. acknowledges that in many education contexts, it is neither reasonable nor possible for end users to perform detailed technical evaluations of proprietary GenAI models. Staff typically engage with commercially developed systems where model behaviour is opaque, testing environments are inaccessible, and usage is limited to interface-level interactions. In this context, S.E.C.U.R.E. shifts the emphasis from system evaluation to input and output evaluation. Rather than assuming staff can assess the internal workings of proprietary GenAI models, the framework focuses on what users put into the system and how they interpret and apply what it produces. It guides staff to consider whether the inputs contain sensitive or confidential information, and whether the outputs are accurate, appropriate, and ethically sound. This approach reflects the practical constraints faced by education institutions using third-party tools and acknowledges that meaningful oversight in such settings is more realistically achieved through informed human judgment at the point of use, rather than technical performance testing of the underlying model.

Guardrail 5: Enable human control or intervention in an AI system to achieve meaningful human oversight.

Guardrail 5 of the Voluntary AI Safety Standard requires organisations to ensure that human oversight is maintained throughout the lifecycle of AI systems. This includes assigning accountability to individuals with the competence and authority to intervene where necessary, establishing oversight and monitoring requirements, and providing adequate training to those responsible for using or managing AI systems. The guardrail emphasises that human control is essential to preventing harm and ensuring AI systems are used responsibly, particularly when multiple parties are involved in development and deployment.

The S.E.C.U.R.E. framework aligns with this guardrail by prohibiting the use of GenAI tools in ways that replace or bypass human judgment. It makes clear that AI-generated content must not be used to make decisions about individuals or groups without appropriate human review. This ensures that meaningful oversight remains in place even where staff use GenAI tools to support their work. The framework prompts users to evaluate outputs critically and to seek guidance or escalate when a proposed use might carry ethical or reputational risks. Rather than automating decisions, S.E.C.U.R.E. reinforces the role of human judgment in verifying, contextualising and approving AI outputs. It acknowledges the importance of situating AI use within the bounds of institutional authority and expertise. By embedding these expectations into everyday use, the framework supports the broader intent of Guardrail 5 to preserve human control in environments where technical intervention in the model itself is not feasible.

Guardrail 6: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.

Guardrail 6 of the Voluntary AI Safety Standard requires organisations to inform end users when AI systems are involved in decisions, interactions or the generation of content. The aim is to build trust and ensure that people understand when they are engaging with or being affected by AI. This includes establishing clear, accessible communication processes, using appropriate disclosure methods such as labelling or watermarking, and ensuring that transparency is consistent across both internally developed and third-party systems.

The S.E.C.U.R.E. framework aligns with the intent of this guardrail but not its full operational detail. It supports responsible use by prompting staff to critically evaluate AI-generated content and consider the implications of using such outputs in professional or educational settings. Staff are expected to apply human judgment, ensure outputs are appropriate for their context and avoid using AI-generated material in ways that could mislead. In this way, S.E.C.U.R.E. reinforces an ethic of transparency and professional accountability. However, the framework does not require users to disclose when content has been generated by AI, nor does it provide formal mechanisms for informing stakeholders about AI involvement. There is no requirement for labelling, watermarking or proactive notification. These decisions are left to staff discretion, guided by professional standards rather than mandated processes. As a result, S.E.C.U.R.E. partially aligns with Guardrail 6 by promoting responsible behaviour but falls short of fully embedding the structured transparency measures described in the standard.

Guardrail 7: Establish processes for people impacted by AI systems to challenge use or outcomes.

This guardrail requires mechanisms that allow individuals to contest AI-influenced decisions that affect them. S.E.C.U.R.E. does not establish any process for contestability or redress. While it encourages human oversight and escalation of risk, it does not provide pathways for affected individuals (such as students or external stakeholders) to challenge outcomes or understand how decisions were informed by GenAI.

Guardrail 8: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

This guardrail focuses on understanding and documenting the provenance and characteristics of third-party AI tools. S.E.C.U.R.E. does not evaluate or require information about the supply chain or development practices behind the tools being used. It assumes the institution is working with third-party GenAI systems but does not require formal procurement checks or documentation at the system level.

Guardrail 9: Keep and maintain records to allow third parties to assess compliance with guardrails.

While S.E.C.U.R.E. encourages staff to document decisions about GenAI use, it does not mandate a consistent or auditable record-keeping process across the institution. There is no centralised system for storing or reviewing these records, and no formal link to broader compliance reporting or audit functions.

Guardrail 10: Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

While S.E.C.U.R.E. prompts staff to consider the ethical impact of GenAI use on individuals and groups, it does not establish a structured process for engaging stakeholders or evaluating their needs and circumstances. Considerations of safety, diversity, inclusion and fairness rest on individual staff judgment rather than on formal consultation or evaluation mechanisms.

Download the S.E.C.U.R.E. framework

S.E.C.U.R.E. GenAI Use Framework for Staff
© 2025 Mark A. Bassett, Charles Sturt University
Licensed under CC BY-NC-SA 4.0.