Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images

Hannah Kniesel (Ulm University), Leon Sick (Ulm University), Tristan Payer (Ulm University), Tim Bergner (Ulm University), Kavitha Shaga Devan (Ulm University), Clarissa Read (Ulm University), Paul Walther (Ulm University), Timo Ropinski (Ulm University), Pedro Hermosilla (TU Wien)

International Conference on Learning Representations, 2024

Abstract

Current state-of-the-art methods for object detection rely on large training datasets annotated with bounding boxes. However, obtaining such annotations is expensive and can require up to hundreds of hours of manual labor. This poses a challenge, especially since such annotations can only be provided by experts, as they require knowledge of the scientific domain. To tackle this challenge, we propose a domain-specific weakly supervised object detection algorithm that relies only on image-level annotations, which are significantly easier to acquire. Our method distills the knowledge of a model pre-trained on the task of predicting the presence or absence of a virus in an image into a set of pseudo-labels that can then be used to train a state-of-the-art object detection model. To do so, we use an optimization approach with a shrinking receptive field to extract virus particles directly, without requiring a specific network architecture. Through a set of extensive studies, we show that the proposed pseudo-labels are easier to obtain and, more importantly, outperform other existing weak labeling methods, and even ground-truth labels, when the time available for annotation is limited.
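To make the pseudo-labeling idea from the abstract concrete, the sketch below shows one way a single virus location could be extracted from a frozen image-level classifier: a soft mask is optimized so that the masked image keeps the classifier's "virus present" score high, while the mask's receptive field is shrunk over the iterations so the optimum collapses onto a particle. This is a minimal sketch under our own assumptions (PyTorch, a hypothetical frozen classifier callable, a Gaussian mask, and a linear sigma schedule), not the authors' implementation.

import torch

def gaussian_mask(h, w, cx, cy, sigma, device):
    # Soft circular mask centered at (cx, cy) with effective radius ~ sigma.
    ys = torch.arange(h, device=device).float().unsqueeze(1)
    xs = torch.arange(w, device=device).float().unsqueeze(0)
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def extract_pseudo_box(classifier, image, steps=200,
                       sigma_start=0.5, sigma_end=0.05):
    # `classifier` is assumed to be a frozen network mapping a (1, C, H, W)
    # image to a scalar score for "virus present"; `image` is one EM image.
    _, _, h, w = image.shape
    device = image.device
    # Learnable mask center, initialized at the image center.
    center = torch.tensor([w / 2.0, h / 2.0], device=device,
                          requires_grad=True)
    opt = torch.optim.Adam([center], lr=1.0)
    for t in range(steps):
        # Linearly shrink the receptive field from sigma_start*w to sigma_end*w.
        frac = t / max(steps - 1, 1)
        sigma = (sigma_start + (sigma_end - sigma_start) * frac) * w
        mask = gaussian_mask(h, w, center[0], center[1], sigma, device)
        masked = image * mask          # keep only the region under the mask
        score = classifier(masked)     # score for "virus present"
        loss = -score.mean()           # maximize the classifier's confidence
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Final pseudo-box: the support of the smallest mask around the center.
    sigma = sigma_end * w
    cx, cy = center.detach().tolist()
    return [cx - sigma, cy - sigma, cx + sigma, cy + sigma]  # (x1, y1, x2, y2)

In practice, boxes extracted this way (one run per detected particle, with previously found regions suppressed) would serve as pseudo-labels to train a standard object detector, which is the role the abstract assigns to them.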

Bibtex

@inproceedings{kniesel2024weakly,
	title={Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images},
	author={Kniesel, Hannah and Sick, Leon and Payer, Tristan and Bergner, Tim and Shaga Devan, Kavitha and Read, Clarissa and Walther, Paul and Ropinski, Timo and Hermosilla, Pedro},
	booktitle={Proceedings of International Conference on Learning Representations},
	year={2024}
}