The integration of Artificial Intelligence (AI) into clinical care is progressing rapidly, moving from diagnostic image analysis in radiology and dermatology to ever more complex applications such as outcome prediction in intensive care units or psychiatric care. The most intricate ethical challenges arising from such AI systems in medicine are linked to epistemological questions, especially when the very design of a system renders it opaque to human understanding and threatens trust in its use.
While opaque AI systems such as deep learning models pose problems for many of their potential applications in society, their impact on medicine raises specific questions that require careful philosophical, ethical, and legal deliberation. How can informed consent, the bedrock of medical ethics, be obtained if a system is in principle unintelligible to its users? How can accountability be attributed in the complex field of interactions between patients, medical professionals, formal and informal caregivers, AI developers, and the AI system itself? What is a useful conceptual framework for trust in an AI system in medicine? And ultimately, what makes AI in medicine trustworthy?
Call for abstracts
In addition to the invited speakers, we invite the submission of abstracts for the meeting from early career scholars (students, postdoctoral researchers, and junior faculty).
Abstracts should be suitable for a 30-minute talk (including discussion) and should be submitted as PDF files by July 7, 2022 to: trustai2022@gmail.com. Submissions should include name, affiliation, title, and an extended abstract (up to 500 words, not including references).
Notification of outcomes will be sent by July 20, 2022.
We are committed to fostering diversity and equality. Submissions from underrepresented groups are particularly welcome.
Organisers: Marcello Ienca (EPFL), Georg Starke (EPFL), Felix Gille (U. Zürich), Alessandro Facchini (IDSIA USI-SUPSI) and Alberto Termine (U. Milano).