Universiteit Leiden


eLaw joins Horizon Europe BIAS webinar on citizen science and AI technologies

On 9 October 2023, Carlotta Rigotti and Eduard Fosch-Villaronga participated in the BIAS webinar on citizen science and AI technologies, the first awareness-raising activity of the Horizon Europe BIAS project. They discussed how citizens can be engaged online in tackling gender and intersectional biases in AI systems.

For the BIAS Consortium, citizen science offers a unique opportunity to engage people from diverse backgrounds in research and technological innovation. By involving citizens in data collection, analysis, and problem-solving, it becomes possible to tap into the collective intelligence of the crowd, accelerating scientific discovery and fostering a deeper understanding of our world, while making it more inclusive and fair.

In their presentation, Carlotta and Eduard covered the desk and qualitative research they have conducted over the past months. The ultimate aim was to gain new and consolidated knowledge about citizens' attitudes towards fairness, diversity, and bias in AI applications in the labour market. Overall, such research outputs could be valuable for understanding public sentiment, identifying areas for improvement in AI applications, and working towards fairer and less biased systems.

Your voice matters!

We would like to invite you to take our survey on discrimination, exclusion, and marginalisation of workers that the use of artificial intelligence (AI) applications in the labour market can sometimes cause.

Join our community!

If you would like to be further involved in the BIAS project, join our national pool of stakeholders by clicking on this link.

See the website of Citizen Science and Artificial Intelligence Technologies for more information about the webinar.

This project has received funding from the European Union's Horizon Europe programme under the open call HORIZON-CL4-2021-HUMAN-01-24 - Tackling gender, race and other biases in AI (RIA) (grant agreement No. 101070468).
