Universiteit Leiden


‘We want to experiment with AI in a responsible and proactive way’

At the Faculty of Governance and Global Affairs (FGGA), a great deal of work is being done to develop a future-proof approach to artificial intelligence. With the FGGAi programme and the introduction of the new tool LUChat, the faculty is taking an important step towards safe, responsible and innovative use of AI. We spoke with Cameron Hope about what is coming next and why this development matters.

What exactly is FGGAi?

‘FGGAi is the programme that brings together all policy development, training and tooling related to AI within our faculty. FGGA has chosen to take a leading role within the university. This means we want to give staff as much space as possible to experiment safely with this technology.

Until now, the main obstacle has been the tooling. Last year we introduced a framework that clarified what is legally permitted. It was already more flexible than the university‑wide policy, but in practice much of our work still fell into the red, prohibited category. As a result, we were not able to gain as much experience as we would have liked.

Our focus now is on creating a safe, private and affordable way to use AI. This will soon be possible with the introduction of LUChat.’

What is the difference between FGGAi and LUChat?

‘LUChat is a tool: a safe, private version of ChatGPT. It runs on stateless servers within the EU, hosted in our own private cloud, which allows users to work completely privately with large language models. The underlying model was developed by OpenAI, but the tool has no connection to OpenAI: no data is shared, stored or used for training.

But a tool on its own is not enough. That is why we have FGGAi: the programme that also includes onboarding, training, AI skills modules and the policy framework.’

Why is FGGA taking the lead?

‘For now, this work is mainly happening within FGGA, although I believe the entire university could benefit from it. We have a strong innovation partnership with the ISSC, which means FGGA is the first to gain access to new functionalities. That aligns with our decision to take the lead.

AI brings both opportunities and challenges, and we can only make the most of them if we gain hands-on experience. We know that the use of shadow AI within the university is widespread and that the need for safe alternatives is clear. Last November, around 600,000 messages were sent to generative AI platforms such as ChatGPT every week from within the university. In February 2026, that figure was over 1,000,000. That is one million potential data breaches, one million possible erosions of our academic independence, every week. The demand for these tools is clearly high, but there are serious security and privacy concerns. We need a tool that is technically as strong as these platforms, but safe and secure.

Many colleagues see that AI can improve the quality or efficiency of their work. That is why we want to actively explore what is possible.

AI also plays a prominent role in our new multi‑year strategy. We are aiming for a balanced approach: on the one hand we want to make tools like LUChat available as early as possible, but on the other hand the core of the strategy is about truly developing AI skills within FGGA.’

Why is it important to approach this carefully?

‘Understanding the technology behind these tools is crucial. Generative AI forces universities to ask difficult questions about how, what and why we do the things we do. At the same time, we work with data for which we are responsible. It is therefore important to understand the trade‑offs, risks and costs of this technology.

One of the biggest challenges is the speed at which AI is being adopted. The telephone took fifty years to become widely adopted, electricity even a hundred. The smartphone became mainstream in less than ten years, but generative AI has become widely used within a year or two of being publicly available. With generative AI, the usefulness depends entirely on the context, and the impact ranges from small efficiency gains to fundamentally rethinking how we measure student success.

If you ignore it, you fall behind. But that does not mean we should dive in blindly. We want to create a middle ground: responsible and proactive experimentation. That requires policy, technology and training.’

What can staff expect in the coming months?

‘Behind the scenes we are working with a group of pioneers who are testing LUChat and providing feedback to the developers. I expect a stable version of the tool to be available to all staff before the summer.

We will also introduce training modules and an update to our policy.’

Where can people go with questions?

‘You can always contact me at c.a.hope@fgga.leidenuniv.nl. There is also a short FAQ available: LUChatFAQ.aspx.’
