RelationField: Relate Anything in Radiance Fields

Sebastian Koch Bosch Corporate Research / Ulm University Johanna Wald Google Mirco Colosi Bosch Corporate Research Narunas Vaskevicius Bosch Corporate Research Pedro Hermosilla TU Wien Federico Tombari Google Timo Ropinski Ulm University

accepted at IEEE Conference on Computer Vision and Pattern Recognition, 2025

Abstract

Neural radiance fields are an emerging 3D scene representation and have recently even been extended to learn features for scene understanding by distilling open-vocabulary features from vision-language models. However, current methods primarily focus on object-centric representations, supporting object segmentation or detection, while understanding semantic relationships between objects remains largely unexplored. To address this gap, we propose RelationField, the first method to extract inter-object relationships directly from neural radiance fields. RelationField represents relationships between objects as pairs of rays within a neural radiance field, effectively extending its formulation to include implicit relationship queries. To teach RelationField complex, open-vocabulary relationships, relationship knowledge is distilled from multi-modal LLMs. To evaluate RelationField, we address open-vocabulary 3D scene graph generation and relationship-guided instance segmentation, achieving state-of-the-art performance on both tasks.
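The idea of querying relationships as ordered pairs of rays can be sketched as follows. This is a minimal, hypothetical illustration assuming per-ray feature vectors rendered from the field's feature branch; all names, dimensions, and the toy MLP weights are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch: an ordered pair of ray features is mapped by a small
# MLP head to a relationship embedding, which can then be scored against an
# open-vocabulary text embedding (e.g. from a vision-language model).
rng = np.random.default_rng(0)
FEAT_DIM, REL_DIM = 32, 16  # illustrative dimensions

# Stand-ins for per-ray features; in the real method these would be
# rendered from the radiance field along the subject and object rays.
feat_subject = rng.standard_normal(FEAT_DIM)
feat_object = rng.standard_normal(FEAT_DIM)

# Toy two-layer MLP with random weights standing in for the learned head.
W1 = rng.standard_normal((64, 2 * FEAT_DIM)) * 0.1
W2 = rng.standard_normal((REL_DIM, 64)) * 0.1

def relationship_embedding(f_i, f_j):
    """Map an ordered pair of ray features to a relationship embedding."""
    h = np.maximum(W1 @ np.concatenate([f_i, f_j]), 0.0)  # ReLU hidden layer
    e = W2 @ h
    return e / (np.linalg.norm(e) + 1e-8)  # unit norm for cosine scoring

def score_against_text(rel_emb, text_emb):
    """Cosine similarity between a relationship and a text-query embedding."""
    t = text_emb / (np.linalg.norm(text_emb) + 1e-8)
    return float(rel_emb @ t)

rel = relationship_embedding(feat_subject, feat_object)
text_query = rng.standard_normal(REL_DIM)  # stand-in for an encoded prompt
similarity = score_against_text(rel, text_query)
print(rel.shape, similarity)
```

Note that the pair is ordered: swapping the subject and object rays yields a different embedding, which is what lets the query distinguish directional relations such as "on top of" from "under".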

Bibtex

@inproceedings{koch2024relationfield,
	title={RelationField: Relate Anything in Radiance Fields},
	author={Koch, Sebastian and Wald, Johanna and Colosi, Mirco and Vaskevicius, Narunas and Hermosilla, Pedro and Tombari, Federico and Ropinski, Timo},
	booktitle={Proceedings of IEEE Conference on Computer Vision and Pattern Recognition},
	year={2025}
}