Künstliche Intelligenz in der Radiologie – jenseits der Black-Box

Luisa Gallee (Ulm University), Hannah Kniesel (Ulm University), Michael Götz (Ulm University), Timo Ropinski (Ulm University)

RöFo - Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren, 2023

Abstract

Artificial intelligence (AI) algorithms allow for the efficient processing of large amounts of data by learning patterns from training examples and applying them to new data. Machine learning (ML) methods in particular have led to impressive advances, driven by improved techniques, larger data sets, and increased computing power. In radiology, these methods are becoming increasingly important for efficiently utilizing the growing volume of image data and the information it contains. The range of applications includes more efficient image acquisition, automatic detection and segmentation of physiological and pathological tissue, and automated diagnostic support. However, a challenge in using such complex AI methods is the often difficult interpretability of their decision-making processes. In clinical routine, decisions, including those made with the help of AI algorithms, must be transparent and verifiable. This paper classifies AI methods into White Box, Black Box, and Gray Box and discusses techniques for explainability in AI. The focus is on the interpretability of the models, while emphasizing that examining the underlying data using explainability algorithms is an important step in the development of data-driven models, such as deep learning models.
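To illustrate the kind of explainability algorithm the abstract refers to, the following is a minimal sketch (not from the paper) of a saliency map computed by finite differences on a hypothetical toy classifier; real methods apply the same idea to deep networks via backpropagated gradients. The function and variable names are illustrative assumptions.

```python
import numpy as np

def toy_model(image):
    # Hypothetical black-box classifier score: a weighted sum of pixels.
    # This stand-in model only "looks at" a central 2x2 patch.
    weights = np.zeros_like(image)
    weights[1:3, 1:3] = 1.0
    return float((weights * image).sum())

def saliency_map(model, image, eps=1e-3):
    """Approximate |d score / d pixel| by perturbing each pixel slightly."""
    base = model(image)
    sal = np.zeros_like(image)
    for idx in np.ndindex(image.shape):
        perturbed = image.copy()
        perturbed[idx] += eps
        sal[idx] = abs(model(perturbed) - base) / eps
    return sal

image = np.random.rand(4, 4)
sal = saliency_map(toy_model, image)
# The map is high exactly where the model attends (the central patch),
# making the otherwise opaque decision process visible per pixel.
```

Gradient-based saliency in deep learning frameworks replaces the finite-difference loop with a single backward pass, but the interpretation is the same: pixels with large values influenced the model's output most.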

BibTeX

@article{gallee2023kuenstliche,
	title={K{\"u}nstliche Intelligenz in der Radiologie -- jenseits der Black-Box},
	author={Gallee, Luisa and Kniesel, Hannah and G{\"o}tz, Michael and Ropinski, Timo},
	year={2023},
	journal={R{\"o}Fo - Fortschritte auf dem Gebiet der R{\"o}ntgenstrahlen und der bildgebenden Verfahren}
}