Universiteit Leiden


The BIAS project attends an international co-creation workshop in Venice

eLaw - Center for Law and Digital Technologies at Leiden University participated in an international co-creation workshop of the Horizon Europe BIAS project, focused on the fairness of AI applications in the hiring process and on the ALTAI requirements.

In developing AI applications, it is imperative to consider not only technological aspects but also the broader spectrum of social, economic, political, and legal dimensions. The impact of technological innovation on society underscores the importance of adopting a multidisciplinary and multi-stakeholder approach to address potential challenges. The Horizon Europe BIAS project, in which the eLaw Center for Law and Digital Technologies is a partner, adopts a participatory and co-creative approach to defining the requirements for identifying and mitigating diversity bias of AI systems used for recruitment and selection purposes.

Carlotta Rigotti and Eduard Fosch-Villaronga

For this purpose, on 7 December 2023, Carlotta Rigotti and Eduard Fosch-Villaronga from eLaw - Center for Law and Digital Technologies participated in the international co-creation workshop hosted in Venice and organized by Smart Venice, as part of the BIAS project. They were joined by Naomi Krosenbrink (Octagon International) and Charlotte Baarda (College voor de Rechten van de Mens), both members of the BIAS Dutch National Lab and renowned experts in diversity and AI in the labor market. Additionally, Elisa Parodi (Università degli Studi di Torino) and Keith Marais (Authentistic Research Collective) were invited to bring their expertise in labor law and disability studies.

Together with the wider BIAS Consortium and other national stakeholders, this diverse group engaged in several activities focusing on the fairness of AI applications in the hiring process. Group discussions, role-plays, and plenary sessions involved the recruitment of six fictitious job applicants using an AI application designed to identify and mitigate diversity biases. Subsequently, the workshop delved into the analysis of the Assessment List for Trustworthy AI, discussing its ethical principles and their contextualization in the hiring process. The event also included a needs analysis, exploring what HR practitioners should learn about diversity bias and the use of AI applications.

Curious about the next steps in the BIAS project and their potential involvement, participants concluded the day on a positive note, marking the end of this series of co-creation workshops.

The BIAS Project: Would you like to participate?

The BIAS project aims to identify and mitigate diversity biases (e.g. related to gender and race) of artificial intelligence (AI) applications in the labor market, especially in human resources (HR) management.

To gain new and consolidated knowledge about diversity biases and fairness in AI and HR, the BIAS Consortium is currently involved in several activities that you might be interested in discovering and joining, like capacity-building sessions and ethnographic fieldwork.

If you want to stay informed about all our activities and/or participate in the project in different capacities, please join these national communities of stakeholders coming from different ecosystems: HR officers, AI developers, scholars, policymakers, representatives of trade unions, workers, and civil society organisations.

To join the national labs, click on this link.
