The S.E.C.U.R.E. framework supports responsible AI adoption in higher education by aligning with institutional policies, government regulations, professional codes, and industry standards.
UNESCO's AI Competency Framework for Teachers
The UNESCO AI Competency Framework for Teachers sets out a global benchmark for the knowledge, skills and values educators need to use AI safely, ethically and effectively. It emphasises a human-centred approach, professional agency, and systemic support—recognising that teachers should not be left to navigate AI risks alone. The S.E.C.U.R.E. GenAI Use Framework aligns with these goals by providing an institutional process that enables staff to assess and manage GenAI use without requiring technical expertise. It embeds ethical safeguards, supports data protection, and offers a clear escalation pathway when risks arise. In doing so, S.E.C.U.R.E. helps institutions meet the systemic conditions outlined by UNESCO, ensuring that AI use in education is guided by responsible governance, not individual discretion.

Higher Education Standards Framework (HESF)
The Higher Education Standards Framework contains the minimum acceptable requirements for the provision of higher education in or from Australia by higher education providers registered under the TEQSA Act. The S.E.C.U.R.E. framework helps higher education providers meet the HESF by embedding risk-based, policy-aligned decision-making into staff use of GenAI. It supports compliance with these standards by providing a clear process for identifying, mitigating and escalating risks. The framework protects against data misuse, privacy breaches, and unethical practices, and ensures that GenAI use remains within institutional and regulatory boundaries. By operationalising policy at the point of use, S.E.C.U.R.E. enables consistent, auditable, and responsible practice across teaching, research and administration, supporting safe and effective GenAI use in line with sector obligations.

Voluntary AI Safety Standard
The Voluntary AI Safety Standard offers practical guidance for Australian organisations on the safe and responsible use and development of artificial intelligence (AI). As part of the Australian Government's Safe and Responsible AI agenda, the standard seeks to ensure that AI systems used in legitimate but high-risk contexts are developed and deployed safely, while enabling innovation in low-risk settings to continue with minimal constraint. The S.E.C.U.R.E. framework helps institutions meet this standard by providing a structured, context-sensitive approach to managing the risks of GenAI in higher education. It supports staff in identifying and mitigating potential harms in line with institutional risk appetite, promotes appropriate escalation pathways for high-risk use cases, and encourages transparency and documentation throughout the AI use lifecycle.

Universities 2024 Financial Audit Report
The Audit Office of NSW recently published the Universities 2024 Financial Audit report, which includes findings and recommendations on AI. One of the recommendations is that universities should "establish and implement an AI policy, and embed the consideration of AI use into governance and risk management frameworks". The S.E.C.U.R.E. framework supports universities in meeting this recommendation by providing a structured, risk-based approach to the use of generative AI (GenAI) tools. It facilitates informed decision-making regarding AI use by staff, ensuring that potential risks are identified and mitigated effectively.

Download the S.E.C.U.R.E. framework
S.E.C.U.R.E. GenAI Use Framework for Staff © 2025 by Mark A. Bassett, Charles Sturt University, is licensed under CC BY-NC-SA 4.0.
