Levels of explicability for medical artificial intelligence: What do we need and what can we get?

Frank Ursin (Medical University Hannover), Felix Lindner (Ulm University), Timo Ropinski (Ulm University), Sabine Salloch (Medical University Hannover), Cristian Timmermann (University of Augsburg)

2023

Abstract

The umbrella term explicability refers to efforts to reduce the opacity of artificial intelligence (AI) systems. These efforts are considered crucial for diagnostic AI applications because there are trade-offs between accuracy and opacity. This entails ethical tensions: doctors and patients want to trace how results are produced, while performance should be improved without ethical compromises. The centrality of explicability invites reflection on the ethical requirements for diagnostic AI systems. These requirements originate from the fiduciary doctor-patient relationship and contain aspects of informed consent. We therefore address the question: “What level of explicability is needed to properly obtain informed consent when utilizing AI?” The aim of this work is to determine the levels of explicability required for ethically defensible informed consent processes and how they can be met technically by developers of medical AI. We proceed in four steps. First, we define the terms commonly associated with explicability in the literature, i.e., explainability, interpretability, understandability, comprehensibility, demonstrability, and transparency. Second, to place these results in context, we conduct a conceptual analysis of the ethical requirements for explicability with respect to informed consent. The framework consists of the five elements of informed consent: information disclosure, understanding, voluntariness, competence, and the decision. Third, each of these elements is examined in relation to the components of explicability identified in the first step. These results allow us to conclude which level of explicability physicians must provide and which patients can expect. In a final step, we survey whether and how the identified levels of explicability can be met technically from the perspective of computer science. To this end, we discuss recent attempts at developing explainable AI. Throughout our work, we use diagnostic systems in radiology as an example, because AI-aided diagnostic systems are already commercially available and clinically applied in this specialty.

BibTeX

@incollection{ursin2023levels,
	title={Levels of explicability for medical artificial intelligence: What do we need and what can we get?},
	author={Ursin, Frank and Lindner, Felix and Ropinski, Timo and Salloch, Sabine and Timmermann, Cristian},
	year={2023},
	booktitle={Medicina Historica - Supplement N. 2},
	volume={6},
	pages={42--43},
	note={Enhancing dialogue to bridge the gaps in Bioethics - Abstract Book}
}