
Bart Custers in Trouw on ChatGPT and cybercrime

The EU proposal for a regulatory framework on artificial intelligence will not prevent the dangers of cybercrime or the spread of fake news using ChatGPT. Cybercriminals can use the new technology to write harmful software, phishing emails and fake news.

Fortunately, there are other solutions, says Bart Custers, Professor of Law and Data Science at eLaw, Center for Law and Digital Technologies, in an interview with Dutch newspaper Trouw (4 July 2023).

The AI Act aims to ensure that artificial intelligence is handled responsibly in Europe. For example, AI systems will be tested for possible violations of human rights, and algorithms must not discriminate. The proposed regulation also prohibits certain types of artificial intelligence, such as AI that manipulates the subconscious and facial recognition by the police. However, the AI regulation offers no solution to one of the greatest dangers of AI: the criminals who abuse it.

ChatGPT has mastered several programming languages. Criminals with no technical knowledge can use it to create software that, for example, infects computer systems with viruses. The software that ChatGPT writes currently still contains errors, but that will not last long. The AI regulation will be unable to tackle this problem, Custers explained, but cybercrime legislation fortunately does offer that possibility. Fake news can be tackled through other new EU legislation, the Digital Services Act, which took effect this year.
