Universiteit Leiden


eLaw article on Fair and equitable AI in biomedical research and healthcare

Eduard Fosch-Villaronga and Bart Custers from eLaw - Center for Law and Digital Technologies wrote an article on fair medicine and AI, highlighting that AI for biomedical research and healthcare should be beneficent and equitable for everyone.

AI in medicine has evolved dramatically over the past five decades. In the healthcare sector, AI is increasingly used in biomedical research and clinical practice, in tasks ranging from automated data collection to drug discovery, disease diagnosis, and robotic surgery. In short, the promise of AI in healthcare is that it can soon help provide safer and more personalized medicine for society. Although these advances entail incredible progress in medicine and healthcare delivery, the introduction and implementation of AI in healthcare raise various ethical, legal, and societal concerns. More research is needed for AI to perform well in real-world settings, and there is particular room for improvement in diversity and inclusion.

Following this line of thought, Eduard Fosch-Villaronga and Bart Custers from eLaw - Center for Law and Digital Technologies wrote a collaborative position paper that shares the results of the international conference “Fair Medicine and AI”, held online on 3–5 March 2021. Scholars from science and technology studies (STS), gender studies, and the ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare.

In the paper, titled Fair and equitable AI in biomedical research and healthcare: Social science perspectives and published in the prestigious journal Artificial Intelligence in Medicine, the authors highlight that AI systems and solutions may have undesirable and unintended consequences, including the risk of perpetuating health inequalities for marginalized groups. The socially robust development and implementation of AI in healthcare require urgent investigation. There is a particular dearth of studies on human-AI interaction and how it may best be configured to deliver dependable, safe, effective, and equitable healthcare. To address these challenges, the authors argue for establishing diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable, and transparent manner. To that end, they stress the importance of including social science perspectives in developing intersectionally beneficent and equitable AI for biomedical research and healthcare, partly by strengthening the evaluation of AI in health.

Highlights of the article

  • Bias, discrimination, and structural injustice for medical AI are overlooked issues.
  • AI for biomedical research and healthcare should be beneficent and equitable.
  • Social science perspectives within AI for medicine development are essential.
  • Challenges are multifold, and an agenda for future research is needed.
  • Qualitative, ethnographic, and participatory approaches could help provide fairer AI.

Access to the paper

You can access the paper by following this link to the ScienceDirect website.

Connected research

Eduard Fosch-Villaronga, Hadassah Drukarch, Pranav Khanna, Tessa Verhoef, and Bart Custers wrote a paper on accounting for diversity in AI for medicine, highlighting how current algorithm-based systems may reinforce biases in healthcare. This topic forms part of the research in the field of Diversity and AI conducted by Eduard Fosch-Villaronga at eLaw and Tessa Verhoef at the Creative Intelligence Lab at Leiden University.
