Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance

Daniel Wolf (Universitätsklinikum Ulm), Tristan Payer (Ulm University), Cathrina Silvia Lisson (Universitätsklinikum Ulm), Christoph Gerhard Lisson (Universitätsklinikum Ulm), Meinrad Beer (Universitätsklinikum Ulm), Michael Götz (Ulm University), Timo Ropinski (Ulm University)

Computers in Biology and Medicine, 2024

Abstract

Self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis. Current findings indicate a strong potential for contrastive pre-training on medical images, but further research is needed to account for the particular characteristics of these images. We hypothesize that the high similarity among medical images hinders the success of contrastive learning in the medical imaging domain. To this end, we investigate different strategies based on deep embeddings, information theory, and hashing to identify and reduce redundancy in medical pre-training datasets. The effect of these reduction strategies on contrastive learning is evaluated on two pre-training datasets and several downstream classification tasks. In all of our experiments, dataset reduction leads to a considerable performance gain on the downstream tasks, e.g., an AUC score improvement from 0.78 to 0.83 on the COVID CT Classification Grand Challenge, from 0.97 to 0.98 on the OrganSMNIST Classification Challenge, and from 0.73 to 0.83 on a brain hemorrhage classification task. Furthermore, pre-training is up to nine times faster after dataset reduction. In conclusion, the proposed approach highlights the importance of dataset quality and provides a transferable strategy for improving contrastive pre-training for classification downstream tasks on medical images.
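
To give a rough intuition for one of the three redundancy-reduction families named above, the following is a minimal sketch of hashing-based near-duplicate filtering on 2D CT slices. It is not the authors' implementation: the average-hash method, the hash size of 8, the Hamming-distance threshold of 4, and the function names average_hash and reduce_dataset are all illustrative assumptions.

	import numpy as np

	def average_hash(slice_2d, hash_size=8):
	    # Block-mean downscale the slice to hash_size x hash_size,
	    # then threshold at the mean to obtain a binary hash.
	    # Assumes the slice is at least hash_size pixels in each dimension.
	    h, w = slice_2d.shape
	    bh, bw = h // hash_size, w // hash_size
	    trimmed = slice_2d[: bh * hash_size, : bw * hash_size]
	    small = trimmed.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
	    return small > small.mean()

	def reduce_dataset(slices, max_hamming=4):
	    # Greedily keep a slice only if its hash differs from every
	    # already-kept hash by more than max_hamming bits.
	    kept_idx, kept_hashes = [], []
	    for i, s in enumerate(slices):
	        h = average_hash(s)
	        if all(np.count_nonzero(h != k) > max_hamming for k in kept_hashes):
	            kept_idx.append(i)
	            kept_hashes.append(h)
	    return kept_idx

	# Toy usage: 100 noisy copies of one slice plus 5 distinct slices;
	# the reduced index list is far smaller than the original 105.
	rng = np.random.default_rng(0)
	base = rng.random((64, 64))
	slices = [base + 0.01 * rng.random((64, 64)) for _ in range(100)]
	slices += [rng.random((64, 64)) for _ in range(5)]
	print(len(reduce_dataset(slices)))

The paper's deep-embedding and information-theoretic strategies follow the same pattern, differing only in how the pairwise similarity between slices is measured before redundant ones are dropped.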

Bibtex

@article{wolf2024less,
	title={Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance},
	author={Wolf, Daniel and Payer, Tristan and Lisson, Cathrina Silvia and Lisson, Christoph Gerhard and Beer, Meinrad and G{\"o}tz, Michael and Ropinski, Timo},
	year={2024},
	journal={Computers in Biology and Medicine},
	volume={183},
	doi={10.1016/j.compbiomed.2024.109242}
}