New AI Student Network is brimming with ideas about studying and testing in the age of AI
How should tests and assessments be organised now that students can use generative AI? This was the subject of one of the first meetings of the newly established AI Student Network. From their unique perspective, the students came up with some very interesting ideas.
The AI Student Network was established by a group of Honours students from the bachelor’s programme Data Science and Artificial Intelligence. The network consists mainly of students who follow the interdisciplinary Honours College track Science, Society and Self at the Faculty of Social Sciences. They meet regularly at the Social Sciences faculty building, where they:
- support fellow students with technical questions, such as computer programming and working with AI tools;
- discuss the social, environmental, political and ethical implications of AI;
- explore how AI can be applied responsibly in different domains of society.
In this way, the AI Student Network functions both as a peer-support hub and as a space where students can reflect critically on the technologies they are helping to develop.
AI and assessment: how should universities respond?
At a recent meeting, the network focused on a pressing question for higher education: How should testing and assessment be organised now that students can use generative AI?
This discussion included Honours students who took the module ‘Artificial Intelligence: Understand and Create’. They explored ethical dilemmas around students using AI to write papers or complete assignments, and proposed solutions.
Looking at the process, not only the product
Honours student Martyna Kluba suggests redesigning written assignments into multiple stages, such as a literature overview, an outline, one or more drafts, and a final version. By assessing this process, teachers can see how a student’s argument and writing develop over time. This makes it harder to suddenly submit an AI-generated final text that does not match the earlier work.
Martyna notes that this approach will not fully prevent AI use, but it can discourage simple copy-paste behaviour and promote more reflective engagement with AI tools. A key challenge is the increased workload for lecturers, who would need to read and assess more interim products.
Verbal quizzes to check understanding
Alexander Djurrema proposes adding a short verbal quiz alongside written assignments. In workgroups, teaching assistants could briefly ask students what challenges they encountered and how they solved them, and ask them to explain specific parts of their work in more depth. This helps examiners assess whether students genuinely understand what they have submitted.
Alexander also observes that AI tools often struggle with longer, complex assignments and require substantial checking and editing. Combined with verbal questioning, this reduces the benefits of outsourcing the full assignment to AI.
Rethinking exam formats
Nicolás Rodríguez argues that the rapid growth of generative AI is forcing universities to reconsider what assessment is for. He highlights several possible directions:
- Handwritten, in-person exams, which strongly protect authorship but are time-consuming to mark and demanding for students.
- Controlled digital environments, where students work on university computers without internet access or external software, allowing digital exams without AI tools.
- Oral examinations, where presentations and follow-up questions reveal whether students truly understand their material, while also raising concerns about feasibility for large cohorts and fairness for students with speaking anxiety.
According to Nicolás, AI will remain part of academic life, and universities should help students learn to use it responsibly. At the same time, assessment must allow examiners to distinguish between a student’s own understanding and machine-generated output.
Plans
The AI Student Network will continue to meet, support fellow students, and develop ideas on AI in education and society. In the coming period, they hope to work with lecturers and faculty staff to explore pilot projects based on these proposals and to further shape responsible AI use within the faculty.