College Guidance on the use of generative AI tools (e.g. ChatGPT)

The information outlined on this webpage was last updated on 7 December 2023.

Several generative AI models built on natural language processing, such as ChatGPT, have come to prominence in recent months. These models represent a major step forward in accessible AI, and they are likely to develop substantially and quickly, becoming something we use frequently in our everyday lives.

For staff and students, these AI models present both opportunities for our education and risks for the integrity of our assessments. 

Jisc has also recently released a primer on generative AI, which staff and students may find useful to consult.

A working group has been established to explore the development, opportunities and implications of these models, and details of the group's work will be published on this webpage. The first major output of the group was approved in September 2023: please consult the College's AI Tools in Teaching and Assessment Guidance.


Frequently asked questions

How does the use of AI models such as ChatGPT affect Imperial staff and students?

The perceived ability of this software to ‘do our work for us’ has prompted concern about the implications for academic integrity should students submit AI-generated work as their own. The focus on problem-solving in STEMMB subjects and the range of Imperial’s assessment types limit these AI models’ capability to produce highly refined answers to our assessments, but their impact on quality assurance remains a concern.

What are the limitations of generative AI models?

These models have ingested information available on the internet. In the case of ChatGPT, this ingestion covers information only up to 2021, meaning it has little ‘knowledge’ of current events. These models display limited success in handling mathematical information and code, and because they work by prediction they often cannot distinguish between accurate references and fabrications. However, their power and accuracy can be expected to develop continuously and rapidly.

What can generative AI do?

Generative AI models have the ability to mimic language in response to human-entered prompts. In some contexts, they can use such prompts to create detailed written responses that reflect general knowledge of the subject matter.

Further information on ChatGPT usage

Information for students

  • AI models are powerful and can be an effective way to check the quality of your written work, prompt new ideas, or generate simplified explanations of complex topics to support your learning.  
  • Submitting work or assessments created by someone or something else, as if it were your own, is plagiarism and a form of cheating; this includes AI-generated content. Please refer to the College’s Academic Misconduct Procedures for further information.
  • To ensure quality assurance is maintained, your department may choose to invite a random selection of students to an ‘authenticity interview’ on their submitted assessments. This is an oral examination in which you are asked about the subject of your submitted work, or how you approached the assignment, to ensure its authenticity. Being invited to an authenticity interview does not mean that there is any specific concern that you have submitted work that is not your own.
  • Natural language processing models work by predicting what text is most likely to follow previous text, based on the information they have ingested. They can therefore return incorrect or false information; for example, non-existent academic references (see the illustrative sketch below).
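
To illustrate this predictive behaviour, the following sketch (written in Python, and greatly simplified compared with real generative models, which make the same kind of prediction over billions of learned parameters) builds a toy ‘next word’ predictor from a tiny sample text. The sample text and all names in it are purely illustrative. The key point is that the program chooses each word by observed frequency alone, with no notion of truth, which is why fluent AI output can still be factually wrong.

    # A toy 'language model': for each word in a sample text, count which
    # word most often follows it, then generate text by always choosing the
    # most frequent successor.
    from collections import Counter, defaultdict

    sample_text = (
        "the model predicts the next word the model has seen before "
        "the reference looks plausible but the reference may be invented"
    )

    # Count successors: follows[word] maps each following word to a count.
    follows = defaultdict(Counter)
    words = sample_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word):
        # Return the most frequently observed next word, or None if unseen.
        successors = follows.get(word)
        return successors.most_common(1)[0][0] if successors else None

    # Generate a short continuation: fluent-looking, but driven purely by
    # frequency, with no check on whether the result is true.
    word, output = "the", ["the"]
    for _ in range(6):
        word = predict_next(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))  # e.g. "the model predicts the model predicts the"

A predictor like this would happily continue ‘the reference...’ with ‘looks plausible’ whether or not any such reference exists, which is exactly the failure mode described in the bullet above.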

Information for staff

AI such as this is likely to become a commonplace tool in our lives and careers. There is potential benefit in adopting an ‘educative’ approach to the use of AI-generated content in our educational programmes. This might include:

  • Strengthening academic integrity and plagiarism awareness training and reviewing the uptake of this training across all educational programmes and levels.  
  • Recognising the potential of AI models as tools to support students in raising the quality of their work. Using AI models to review the accuracy of work and to support idea generation may be considered valid uses of these tools, and representative of the role they are likely to play in students’ future careers.
  • Designing assessments around learning outcomes that test, and give credit for, higher-order skills that AI cannot yet replicate, such as critical thinking and the synthesis of new ideas, and refreshing the specific questions posed in assessments regularly.
  • Exploring opportunities presented by the adoption of AI models to enhance the educational experience; for example, the use of AI as a tool for formative feedback.
  • Considering the implications of this technology for the integrity of examinations which are sat remotely or allow students to ‘bring their own devices’ and how such instances can be controlled. 
  • To ensure quality assurance is maintained, a department may choose to invite a random selection of students to an ‘authenticity interview’ on their submitted assessments: an oral examination in which students are asked about the subject of their submitted work, or how they approached the assignment, to ensure its authenticity. Students should be selected at random, and it should be explained to them that an invitation does not mean there is any specific concern that they have submitted work that is not their own.
  • Familiarising yourself with the AI software available in your discipline, its strengths and weaknesses, the types of assessment it is able to solve, and whether the assessments you set are susceptible.
  • Not relying on AI detection software: this technology is unproven, and we may not have permission to upload student work to external sites.

Approach to AI detection tools and Otter.AI

Members of the Imperial community may be aware that several artificial intelligence ‘detection’ tools are available on the market, including from the technology providers that currently supply the College and other universities with plagiarism detection tools.

Academic integrity is at the heart of all we do. At this time, the College does not intend to deploy any additional AI detection functionality due to concerns regarding the maturity of these products and their ability to accurately identify incidents of students utilising AI without the express permission of their teacher or outside the parameters of what has been agreed for their programme. These concerns are widespread across the higher education sector and are well-documented.
 
Our current approach, in line with many other universities in the UK, is to train our staff to understand AI, identify its various uses, set parameters for those uses within students’ programmes, and be alert to the common features of AI-generated work. 

This approach does not preclude the College from reviewing the decision in future, should we and the wider sector gain greater confidence in any technological solutions that become available.

With regard to the platform 'Otter.AI', the College has decided to implement a temporary access block on our networks. This will remain in place until a review of the platform's potential uses has taken place.

Working Party Membership and Terms of Reference

The membership of the Working Party and the group's terms of reference are available at: AI Tools Working Party.

The group continues its work into the new calendar year (2024) and intends to launch several new initiatives aimed at supporting the staff community in navigating the onset of AI.