Alignment with the Universities 2024 Financial Audit Report
The Audit Office of NSW recently published the Universities 2024 Financial Audit Report, which includes findings and recommendations on universities' use of AI.
The S.E.C.U.R.E. framework aligns closely with several key expectations set out in that report, particularly those relating to governance, risk, digital capability, and internal controls. While its primary purpose is to guide safe and responsible use of generative AI tools by university staff, S.E.C.U.R.E. also contributes more broadly to institutional maturity in managing emerging technologies. Its design and implementation address multiple audit recommendations, making it a practical model for operationalising digital risk management across the sector.
The most direct point of alignment is with the recommendation that universities should "establish and implement an AI policy, and embed the consideration of AI use into governance and risk management frameworks." S.E.C.U.R.E. supports this by offering a structured, risk-based process that staff apply before engaging with GenAI tools. It prompts users to assess potential harms across six domains and provides clear thresholds for when to proceed, when to pause, and when to escalate. This helps ensure that GenAI use is governed by institutional policies and aligned with formal oversight processes, rather than left to informal judgment or unmonitored experimentation.
Beyond this, S.E.C.U.R.E. contributes to the audit’s broader focus on strengthening institution-wide risk management. The framework enables distributed risk identification and response at the point of use, which is critical for AI tools that are widely accessible and rapidly evolving. It brings risk assessment into everyday decision-making and promotes consistency across faculties and professional units, thereby reinforcing enterprise risk management principles in practice.
S.E.C.U.R.E. also supports the development of internal control mechanisms. It functions as a gatekeeping process that determines when AI tools can be used and under what conditions. By guiding staff to consider risk before use and to document their decisions, S.E.C.U.R.E. strengthens both preventative controls (through risk screening) and responsive controls (through escalation and record-keeping). This aligns with the audit’s finding that many institutions need to enhance internal controls to keep pace with new and emerging technologies.
Finally, the framework contributes to improving digital capability and culture across the university workforce. While not a formal training program, S.E.C.U.R.E. operates as a scaffold for building awareness of, and responsibility for, AI use. It introduces staff to key regulatory, ethical, and technical considerations and embeds that reflection into daily workflows. In doing so, it raises the baseline of AI-related digital literacy and supports the audit’s broader emphasis on ensuring staff are equipped to manage digital transformation responsibly.
Download the S.E.C.U.R.E. framework
S.E.C.U.R.E. GenAI Use Framework for Staff © 2025 Mark A. Bassett, Charles Sturt University. Licensed under CC BY-NC-SA 4.0.
