ChatGPT: Faculty strategy
ChatGPT and similar tools are on the rise. How can we deal with this as a university and faculty? There are concerns within the education sector about fraud by students: if students rely on artificial intelligence, they may fail to develop the skills they need to graduate. This affects not only education, but also other parts of the university and society. Besides looking at how we can prevent fraud, we are also exploring how, in the long term, we can actually work with applications like ChatGPT. Below, we explain what it is, when using it counts as fraud, and, above all, how students can still develop their skills for written assignments.
What can you do with ChatGPT?
ChatGPT generates text in response to a prompt (a question or command given by a user) and based on a gigantic dataset of text material (originating from sources including the internet). Users can indicate how long the text should be and what form it should have. The program generates these texts by predicting (using statistics) what the next word in the text will most likely be. This prediction is based on the words and sentences that come before it.
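The idea of statistical next-word prediction can be illustrated with a toy sketch. The code below builds a bigram model: it counts, in a tiny made-up corpus, which word most often follows each word, and then "generates" text by repeatedly picking the most likely successor. This is a vastly simplified illustration of the principle; ChatGPT itself uses a large neural network trained on an enormous dataset, not simple word-pair counts.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; each token is a word or punctuation mark.
corpus = (
    "the students write an essay . "
    "the students read the sources . "
    "the sources support the essay ."
).split()

# Count bigrams: follow_counts[w] maps each word that follows w
# to how often it does so in the corpus.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    return follow_counts[word].most_common(1)[0][0]

def generate(start, length=5):
    """Generate text by repeatedly predicting the most likely next word."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # prints "the students write an essay ."
```

Because the model only ever looks at word frequencies, it has no notion of truth or sources; it simply continues the text in the statistically most plausible way, which is also why a larger model can sound convincing while being wrong.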
When is it fraud?
Although these AI programs may serve as a useful tool – comparable to a spelling checker or Wikipedia – they cannot replace a student’s own original work. After all, students learn the most from the writing process. If a student uses an AI tool like ChatGPT without acknowledging it, the student is committing fraud.
How can you prevent fraud with ChatGPT?
It is not possible to prevent fraud entirely, since AI is constantly developing. That said, the following tips may help to ensure (at least for now) that using a program like ChatGPT is very difficult and that students continue to learn from the assignments themselves.
1. Base assignments on sources that ChatGPT has no access to, such as Brightspace, the syllabus, or articles behind a paywall.
2. Require students to cite their sources. ChatGPT makes no distinction between sources: a scholarly article, a comedian’s satirical column, a conspiracy theorist’s blog – it’s all one and the same. Since there is no direct link between the generated text and the collection of texts on the basis of which ChatGPT makes its prediction, the program is unable to provide a reference to the source.
3. Ask students to weigh up different points of view. The amount and breadth of text material in the dataset means that ChatGPT often comes up with what seems like a reasonable answer to a simple open-ended question, but it is not (yet) very good at substantiating or weighing up different points of view.
4. Ask critical follow-up questions, for example: ‘What is incorrect or has not been considered in this argumentation?’
5. Ask for products other than plain text. In principle, ChatGPT can only produce sections of text; if you ask for some other kind of product, the end result will prove useless.
6. Refer to recent sources. At present, ChatGPT’s dataset does not extend beyond 2021.
We do not want teaching staff to rush to replace written assignments, take-home exams, and other types of assessments with a written exam on location. This is logistically impossible. Moreover, the method of assessment stated in the course description in the e-Prospectus is part of the Course and Examination Regulations (OER) of the current academic year. So it is not possible to simply change this once the academic year has started. As mentioned above, we will also be looking at opportunities for actually using such AI programs in our work.