
Guidance on using AI tools for business processes

Purpose and scope

1. We recognise the range of Generative AI tools available to support business processes at the University, and the opportunity this represents for us. We are equally aware that the use of AI for this purpose is fast-developing and that we must identify and manage emerging risks as an institution and act in compliance with our Information Security Policy. This guidance relates to the use of AI within business operations. Staff or student use of AI for the purposes of learning and teaching is not covered by this guidance; information to support these activities can be found here.

2. The responsible use of Generative AI tools by staff at the University will comply with all laws and regulations applicable to the use of Generative AI in the UK.

Definition of AI

3. For the purposes of this guidance:

  • ‘Generative AI’ is a system that can generate new outputs in a range of formats such as text, images, or sounds. An example is using a Large Language Model (LLM) to summarise text and data and present it in a different format.
  • A Large Language Model (LLM) is an advanced computer program that processes and generates human-like text based on the vast amount of language data it has learned from, by identifying patterns in words and phrases.
  • Generative AI is not the same as Robotic Process Automation (sometimes called RPA or automation) whereby a task is repeatedly performed without human intervention according to set parameters.

Our principled approach

4. Staff using Generative AI in their work must ensure that they comply with our Information Security Policy and follow the principles below:

  • Integrity: Generative AI tools must be used in a way that aligns with the University's ethical principles and its values of equality, diversity, and inclusion.
  • Judgement and Accountability: Staff are accountable for decisions made with the assistance of Generative AI. Generative AI should augment, not replace, human judgement. Generative AI is known to ‘hallucinate’: it can generate content that is not grounded in reality, make mistakes, and reproduce bias. Always double-check outputs and use your own judgement as to their accuracy and potential bias.
  • Transparency: The University should be transparent about its use of Generative AI. Where Generative AI is used to support decision-making, users must clearly state how. For example, when Generative AI is used to produce meeting notes, these should be checked by the Chair or organiser of the meeting, and attendees must be made aware that Generative AI was used. Further guidance is available here.
  • Governance: Generative AI usage by staff must comply with our Information Security Policy and all relevant legal requirements.

Using generative AI tools

5. The use of approved Generative AI tools may be considered for the following purposes:

  • To enhance operational efficiency
  • To support and inform decision-making
  • To improve service delivery to students, staff and other stakeholders
  • To support every student from every background to fulfil their potential at the University 

6. The potential uses of Generative AI are far reaching. The following are examples of how Generative AI might be used to support University business processes:

  • To summarise and improve the ‘readability’ of information for presentation in a different format, for example using the contents of a Word document to generate a PowerPoint presentation
  • To summarise documents (for example to help prepare for a committee meeting or prepare a paper)
  • To produce draft meeting notes or minutes before they are reviewed by the Chair/meeting organiser
  • To identify trends in a dataset that does not contain sensitive data, which may improve the efficiency of report writing or similar activities
  • To support student and staff enquiries with staff oversight
  • To help to draft emails more efficiently

Where the use of generative AI is not permitted

7. Generative AI tools may not be used for the following purposes:

  • Decision-making: Generative AI must never be the sole decision maker, for example in decisions relating to admissions, staff recruitment, promotions, or performance management
  • Processing or analysing confidential or personal information as outlined in our Information Security Policy
  • Where the Generative AI tool is a ‘personal’ account and has not been approved for use by the University

Approval process for generative AI tools

8. Approved Generative AI tools are those with which the University has an established contractual agreement and licence; these currently include Zoom AI Companion. Staff using these tools should be logged in with their work account. Staff wishing to use a Generative AI tool not listed on this page must request that it be considered by the Deputy CTO via the DITS Helpdesk in the first instance.

Non-compliance with this guidance

9. Non-compliance with this guidance may result in the revocation of access to Generative AI tools, withdrawal of network access, and disciplinary action.

Guidance review

10. On an annual basis, the AI Advisory Group will:

  • Review the institutional use of agreed Generative AI tools to assess engagement, effectiveness, and adherence to the principles.
  • Review the guidance provided to staff regarding the use of Generative AI.