Impact on examination and assessment
The area within higher education most affected by generative AI is examinations.
Since generative AI can often replace students' own work, such as writing texts, questions arose early on about banning the use of generative AI altogether or shifting all written examination to campus. However, to make thoughtful and sustainable decisions about examinations, it is important to understand both how generative AI works and the extent to which examinations are affected.
Below you can find information to assist in planning and conducting examinations.
Services like ChatGPT, GitHub Copilot, jenni.ai, etc., can be used to generate many types of content or perform many kinds of intellectual work: writing texts, performing text analyses, writing or commenting on programming code, and so on. There is a significant risk that individual students may be tempted to use these to solve assignments, which could be considered deception during examination. It is therefore essential to take this into account when designing examinations and to clearly communicate the applicable boundaries.
The first step in preventing deception during examinations is to clarify how these types of services may or may not be used, and how their use should be reported when it is allowed. There is uncertainty among students about where the boundaries lie, so it is important to make them explicit.
Most importantly, it should be communicated that it is the student's own knowledge and skills that are assessed, and that generative AI must not be used to replace the work the student is supposed to do. In cases where students are partly permitted to use generative AI, this also needs to be clearly defined so that the boundaries are unambiguous (see "Guidelines and rules" under Suspicion of cheating below, and also Guidance for teachers on using generative AI in education (pdf)).
There are already examples of guidelines from various parts of the University of Gothenburg, both for teachers to develop this type of information (example from the Department of Global Studies (pdf)) and for students, such as in thesis work (example from the Department of Applied IT (pdf)). These can be used as inspiration but make sure to tailor the information according to the conditions of the specific examination.
With the possibilities offered by generative AI, it is essential to review the forms of examination used. Some forms are more at risk than others, especially written take-home exams where the questions are formulated in a way that makes them easy for an AI to answer. At the same time, it is important that the form of examination is based on the learning outcomes being assessed, so that the examiner can determine whether students have met them. It may therefore also be relevant to consider combining multiple forms of examination or spending more time on guidance during the writing process. Note that the form of examination indicated in the course syllabus must be used; changing the form of examination thus requires changing the course syllabus. Deviating from the rules about the form of examination in the course syllabus is not allowed.
Below are some thoughts on forms of examination to reduce the risk of cheating using generative AI. You can read more about different forms of examination on PIL's web pages regarding examinations (Swedish only).
- On-campus written examination
Among the most common forms of examination, the traditional on-campus written examination is the most effective at reducing the risk of students using generative AI to solve tasks. This is especially true for examinations conducted in DISA (login required), where access to various resources can be limited.
- Oral examination
An oral examination where the student and examiner are in the same physical location also provides good conditions. Although oral examinations can often be time-consuming, they can at the same time reduce the need for written feedback, thereby saving time in other areas.
- Take-home examination with clear and explicit reference management
At the time of developing these resources, ChatGPT and similar services still have some difficulty handling references, especially if these are specified and included in the required reading. Requiring references to the course literature therefore makes it more difficult to use generative AI to solve a task, although some risk remains. If the task also involves deeper analysis based on, for example, case descriptions or documented personal experiences, it becomes even more challenging.
- Process
An important way forward may be to not only assess a finished result but to follow the development of a text or other product. This model, often used in supervising thesis projects, can be effectively applied in other examination settings as well. By following a student in the process of writing a text, there is also the opportunity to provide so-called formative feedback, where you as a teacher can support students in their work. This not only limits the possibilities of using generative AI to create an entire work but is also beneficial from a pedagogical perspective. In this scenario, teachers might perceive that reading texts is time-consuming, but it is important to note that this approach often reduces the workload during the actual assessment. Additionally, there is less need to give detailed summative feedback at the end.
- Combining examinations
By combining different types of examinations, you can also reduce the risk of cheating. For example, this could involve students submitting a text but also participating in an oral exam that either relates to the work they have done or constitutes a separate part.
Support for reviewing and revising forms of examination is offered in a workshop organized by PIL.
In addition to preventing opportunities for cheating by reviewing examination formats, it is also important to work on other issues that give students the opportunity to develop a responsible approach to generative AI. This involves helping them gain knowledge about both generative AI and academic integrity so that they understand what constitutes responsible use of various types of services.
Furthermore, it can also be important to draw attention to the consequences of undesirable use of generative AI. If such services are used, for example, to generate solutions to tasks, students risk missing the opportunity to learn. Responsible use of generative AI means that it should not be used to replace, for example, creative or intellectual processes, but only as support in these areas.
To provide students with good conditions for developing a sound approach, it is important to refer to the available support, primarily through the Academic Language Unit (ASK) and the University Library. Students can access these through the Student Portal.
Lund University has produced a short film that addresses various aspects of the background and reasons for cheating:
Why do students cheat?
Since every text generated by services like ChatGPT is more or less unique, it cannot be identified using text-matching tools like Ouriginal (formerly Urkund). Since there is no original source being copied, it is also debatable whether this even constitutes plagiarism; rather, it is the independence of the work that can be questioned. The text-matching tools available are therefore poorly suited to identifying AI-generated texts, and we cannot rely on them in this context.
There are several services, known as classifiers or detectors, that claim to be able to analyse texts and assess the extent to which they are AI-generated, but it has become very clear that these have significant problems. There are numerous examples of these services identifying texts as AI-generated when they are not (so-called false positives), and it is relatively easy to manipulate AI-generated texts to avoid detection. In addition, there are both ethical and legal considerations that prevent individual teachers from using these types of services to examine students' work, concerning, for example, where data is processed and how it is subsequently used.
Read more about detectors in Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 26-39. https://doi.org/10.1007/s40979-023-00146-z
In cases where there is suspicion that a student has used AI inappropriately, the procedure is the same as with other types of disciplinary matters. More information on this can be found in the Staff Portal: Disciplinary Matters (login required).
- Guidelines and rules
Use of generative AI in education is not prohibited at the University of Gothenburg. How generative AI is used in education (whether recommended, permitted, or restricted) is determined at the course, program, and/or department level, taking into account the applicable rules for the use of IT tools at the University of Gothenburg. The university's guidelines on generative AI, directed at teachers and students, make clear that the University's rules and regulations for first- and second-cycle examinations apply. The guidelines also emphasize that it is of utmost importance that individual teachers/examiners inform students about the specific rules and guidelines that apply to each individual examination, regardless of whether the use of generative AI is allowed or not. More information can be found in the section "Information to students" above.
- Use of generative AI as a part of examinations
Recently, there have been more and more examples where generative AI has been used as a part of an examination. For instance, it could involve students having AI generate texts within a subject area, which they then need to analyze based on the course literature, or students being able to use generative AI as support in their work. If considering allowing students to use generative AI as part of an examination, it is important to design tasks in such a way that they do not risk impairing the ability of individual students to participate. Equality in education needs to be maintained, and in cases where the University of Gothenburg does not offer access to services or tools, one cannot design examinations/teaching that require students to use them. It is also very important to communicate both what constitutes permissible use and how any use of generative AI should be reported.
Since several generative AI services have shown a relatively good ability to analyse texts, it may be tempting to use them to support assessment work. However, this is advised against from several perspectives. Since grading is an exercise of official authority, it is important that the assessment is performed by the person making the decision about the grade. There are also ethical aspects to consider, such as a student's work being used in a service without their knowledge, or the risk that the work could be used to develop systems or services. Finally, there are pedagogical aspects: the feedback that often accompanies assessment work contributes to a student's learning process.