Alignment with the Jisc (UK) Principles for the use of AI in FE colleges
Jisc, in partnership with the Association of Colleges, established six principles in 2023 to guide the responsible use of artificial intelligence in further education colleges.
Principle 1: Safe, ethical and responsible use
Jisc advises colleges to place safety, security, transparency, fairness, accountability and the right to challenge AI decisions at the centre of their AI strategy. The S.E.C.U.R.E. framework helps staff achieve this by guiding them through six key areas of risk: security credentials, ethical use, confidential information, personal data, rights protection and output evaluation. Staff are required to consider potential risks, document their decisions and, where necessary, escalate use cases for further review. This supports a proactive and consistent approach to ethical AI use.
Principle 2: Building AI skills and literacy
Jisc encourages colleges to help learners develop both practical AI skills and broader critical understanding. While S.E.C.U.R.E. is not a teaching resource, it supports skill development for staff. As educators work through the framework, they become more familiar with the strengths and limitations of generative AI. Over time, this regular engagement builds professional capability and reinforces best practice.
Principle 3: Enabling staff to use AI effectively
Jisc highlights the potential of AI to support staff with productivity and teaching innovation. S.E.C.U.R.E. enables this by helping staff identify low-risk uses of AI that can be implemented immediately. For more complex or sensitive applications, the framework sets out a clear process for escalation and oversight. This allows institutions to support innovation while managing risk appropriately.
Principle 4: Equality of access
Jisc calls on institutions to ensure that AI tools are accessible to all learners, regardless of background or ability. S.E.C.U.R.E. supports a consistent and institution-wide approach to the use of generative AI. This reduces variability in practice between departments and helps ensure that AI use is governed fairly across the organisation. While S.E.C.U.R.E. does not address access directly, it strengthens the foundations for equitable implementation.
Principle 5: Governance and workforce development
Jisc emphasises the need for clear governance, staff training and alignment with professional standards. S.E.C.U.R.E. meets this expectation by embedding governance into everyday use. Staff are asked to document their decisions and to escalate any use case that exceeds defined risk thresholds. This creates a transparent record of AI use and supports compliance with institutional policies and regulatory frameworks.
Principle 6: Strategic consistency and collaboration
Jisc encourages colleges to work together, align their approaches and build on shared national guidance. S.E.C.U.R.E. is available under a Creative Commons licence and can be adopted across the sector. It offers a consistent and repeatable process for assessing generative AI use that aligns with broader policy, risk and quality frameworks. This helps institutions maintain coherence in their AI strategies while supporting collaborative practice.
Download the S.E.C.U.R.E. framework
S.E.C.U.R.E. GenAI Use Framework for Staff © 2025 Mark A. Bassett, Charles Sturt University. Licensed under CC BY-NC-SA 4.0.