
Artificial intelligence can discriminate. How can this be prevented?

What do gender identity and digital technology have to do with each other? That question is the subject of new research at Leiden University. Researchers Tessa Verhoef and Eduard Fosch-Villaronga, of the Faculties of Mathematics and Natural Sciences, and of Law, will investigate the interaction between artificial intelligence and gender identity. They have received a grant for this research from the Global Transformations and Governance Challenges (GTGC) programme.

Tessa Verhoef explains that new global systems can discriminate on the basis of gender if they are not properly designed. 'We are investigating the consequences of gender determination by artificial intelligence (AI). A system that mislabels or discriminates against people can obviously be very hurtful and harmful, especially for people from the LGBTQIA+ community.'

Far-reaching consequences 

'Automated recognition systems are becoming increasingly important in society and support, for example, decision-making that can have far-reaching consequences for citizens. Think of the automatic rejection of an online credit application, or even misdiagnoses of certain diseases,' says Eduard Fosch-Villaronga (Associate Professor of Law, Robots and AI).

Globally, there are concerns that automatic recognition systems exacerbate and reinforce existing prejudices about gender, age, race and sexual orientation. The consequences of automatic gender classification, in particular, are poorly understood and often underestimated. People reduce gender to stereotypes, and even where there are strong convictions about what gender is or should be, the concept is still applied and interpreted too simplistically.

Prejudices

In previous research the researchers found that Twitter makes classification errors much more frequently with women and the LGBTQIA+ community than with heterosexual men. 'This is not only annoying, but can also be painful. We also see that language applications reproduce gender stereotypes: female words are more often associated with family terms, while male words are more often linked to career terms.'
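The kind of association described in that quote can be made concrete with pretrained word embeddings, in the style of the word-embedding association test of Caliskan et al. (2017). The sketch below is a minimal illustration, not the researchers' own method: it assumes the Python gensim library with its downloadable GloVe vectors, and the word lists are illustrative examples rather than the actual test sets.

import gensim.downloader as api

# Load small pretrained GloVe word embeddings (downloaded on first use).
model = api.load("glove-wiki-gigaword-50")

female = ["she", "woman", "her", "daughter", "mother"]
male = ["he", "man", "his", "son", "father"]
career = ["career", "salary", "office", "business", "professional"]
family = ["family", "home", "parents", "children", "marriage"]

def association(word, attributes):
    # Mean cosine similarity between one word and a set of attribute words.
    return sum(model.similarity(word, a) for a in attributes) / len(attributes)

def career_vs_family(targets):
    # Positive values mean the target words sit closer to career terms
    # than to family terms in the embedding space.
    return sum(association(w, career) - association(w, family)
               for w in targets) / len(targets)

print(f"male words:   {career_vs_family(male):+.3f}")
print(f"female words: {career_vs_family(female):+.3f}")

If the male words score higher than the female words, the embeddings reproduce exactly the career/family stereotype quoted above.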

Harmful and outdated stereotypes and prejudices can even be reinforced in this way. In other areas of application too, such as health care, prejudice can lead to fatal consequences, which is very worrying.

Ethical guidelines

'There are two international human rights treaties that prohibit harmful and unlawful stereotyping, but they were written from the viewpoint that "man" and "woman" were the only recognisable genders. Today, many more gender identities are recognised that are not mentioned in these conventions. The current ethical guidelines for AI also fail to provide enough guidance in this area. We will now investigate how these rules can be improved,' says Fosch-Villaronga.

A brief video introduction to their research (in Dutch) is available on the Sleutelstad YouTube channel.

This project is a collaboration between the research disciplines of Computer Science, Law, Philosophy, Politics and Gender Studies. In addition to a literature study and preliminary research, interdisciplinary workshops will be organised. This will create a community at Leiden University around the theme of diversity and inclusion in AI and lay the foundation for larger grant applications and follow-up projects.
