Fighting together at Leiden University against diversity bias in AI for the labour market
eLaw - Center for Law and Digital Technologies at Leiden University hosted the first Horizon Europe BIAS Project co-creational workshop, geared towards defining the requirements for identifying and mitigating diversity bias in AI systems used for recruitment purposes.
Designing AI solutions involves technological aspects as well as social, economic, political, and legal considerations. Since such AI solutions have the potential to significantly impact society as a whole, adopting a multidisciplinary and multi-stakeholder approach can be valuable in mitigating problems arising from their development. The Horizon Europe BIAS project, in which the eLaw Center for Law and Digital Technologies is a partner, embraces a participatory and co-creative approach to defining the requirements for identifying and mitigating bias in AI systems used for recruitment purposes.
To this end, on 4 July 2023, Carlotta Rigotti and Eduard Fosch-Villaronga from eLaw - Center for Law and Digital Technologies hosted the first co-creational workshop of the HE BIAS project.
Supported by Niti Chatterjee and Alexa Zainea from the Adv. LL.M. in Law and Digital Technologies, who helped run the workshop, the four facilitators engaged a high-level group of experts from different sectors, including HR experts, AI developers, interdisciplinary scholars, civil society organisations, and workers, in an interactive discussion of gender, race, and disability bias in recruitment and HR management, especially when embedded in AI applications.
Participants kicked off the workshop with a panel discussion on biases in HR and recruitment. In small groups of five to six, and drawing on their own knowledge and experience, they discussed whether there are good reasons to be optimistic about the uptake of AI in the labour market or whether, on the contrary, we should be concerned about it. After considering whether these developments could hamper diversity, equity, and inclusion (DEI), the groups shared their thoughts on what makes a hiring process fair and how such fairness could be sustained in an AI-mediated labour market.
After the panel discussion, the workshop took a very hands-on approach: participants role-played a recruitment process. An HR officer presented a fictitious job offer, and participants identified the biases in the text. The participants then put themselves in the shoes of a fictitious candidate with several intersectional characteristics beyond the majoritarian ones (in terms of gender, race, and disability) who was applying for that position. They co-wrote a cover letter and then discussed together whether biases could be found in it and in the CV.
Participants realised that identifying biases in HR and recruitment is not straightforward and that, since diversity improves group thinking, it requires a careful thought process involving many people and disciplines. The facilitators captured the participants’ attitudes towards these processes, which will directly feed into the construction of the debiaser, the main tool the HE BIAS Project will develop to fight bias in the labour market.
Curious about the project’s next steps, participants left the workshop with smiles on their faces, having worked together with other enthusiastic participants, and grateful to the organisers for pursuing such important research for the future of work.
The BIAS Project: Would you like to participate?
The BIAS project aims to identify and mitigate diversity biases (e.g. related to gender and race) of artificial intelligence (AI) applications in the labour market, especially in human resources (HR) management.
To gain new and consolidated knowledge about diversity biases and fairness in AI and HR, the BIAS Consortium is currently involved in several activities that you might be interested in discovering and joining:
- Survey
Help us map personal experiences of and attitudes towards AI applications in the labour market. To fill out our survey, please use the following link.
- National Labs
If you want to stay informed about our activities and/or participate in our project in different capacities, please join these national communities of stakeholders from different ecosystems: HR officers, AI developers, scholars, policymakers, trade union representatives, workers, and civil society organisations. To join the National Labs, click on the following link.
- 2nd Co-creational workshop
Save the date! The second co-creational workshop will take place on 30 August 2023. Registration will open soon via this link. If you are interested in joining, please contact us by email.
The sister projects
The HE BIAS Project has two sister projects funded under the same European Commission programme: FINDHR and AEQUITAS.
FINDHR stands for Fairness and Intersectional Non-Discrimination in Human Recommendation; the project aims to facilitate the prevention, detection, and management of discrimination in algorithmic hiring and closely related areas involving human recommendation. At our workshop, we had the great privilege of welcoming Nina Baranawoska from Radboud University and Clara Rus from UVA, both from this project - the first meeting of a hopefully very long working relationship.
Photo: Olivier Collet