Unified Domain Generalization and Adaptation for Multi-View 3D Object Detection


The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

1Korea University   2Samsung Advanced Institute of Technology  

Poster

Presentation Video

Abstract

Recent advances in 3D object detection leveraging multi-view cameras have demonstrated their practical and economical value in various challenging vision tasks. However, typical supervised learning approaches face challenges in achieving satisfactory adaptation toward unseen and unlabeled target datasets (i.e., direct transfer) due to the inevitable geometric misalignment between the source and target domains. In practice, we also encounter constraints on resources for training models and collecting annotations for the successful deployment of 3D object detectors. In this paper, we propose Unified Domain Generalization and Adaptation (UDGA), a practical solution to mitigate those drawbacks. We first propose a Multi-view Overlap Depth Constraint that leverages the strong association among multi-view cameras, significantly alleviating geometric gaps due to perspective view changes. Then, we present a Label-Efficient Domain Adaptation approach to handle unfamiliar targets with significantly fewer labels (i.e., 1% and 5%), while preserving well-defined source knowledge for training efficiency. Overall, the UDGA framework enables stable detection performance in both source and target domains, effectively bridging inevitable domain gaps while demanding fewer annotations. We demonstrate the robustness of UDGA on large-scale benchmarks (nuScenes, Lyft, and Waymo), where our framework outperforms current state-of-the-art methods.

Method

Overview

To successfully develop and deploy multi-view 3DOD models, we need to solve two practical problems: (1) geometric distributional shift across different sensor configurations, and (2) limited resources (e.g., insufficient computing resources, expensive data annotations). The first problem poses a challenge in learning transferable knowledge for robust generalization to novel domains. The second inevitably requires efficient use of computing resources for training and inference, as well as label-efficient development of 3DOD models in practice. To tackle these practical problems, we introduce a Unified Domain Generalization and Adaptation (UDGA) strategy, which addresses a series of domain shift problems: learning domain-generalizable features significantly improves the quality of parameter- and label-efficient few-shot domain adaptation.
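The Multi-view Overlap Depth Constraint is described only at a high level here. As an illustration of the general idea, the sketch below (our own simplification, not the paper's implementation) penalizes depth disagreement where two camera views overlap: pixels from camera A are back-projected with their predicted depth, transformed into camera B's frame, and compared against B's predicted depth at the reprojected pixel. The function name and the exact loss form are assumptions for illustration.

```python
import numpy as np

def overlap_depth_loss(depth_a, depth_b, K_a, K_b, T_ab):
    """Hypothetical sketch of a multi-view overlap depth consistency penalty.

    depth_a, depth_b: (H, W) predicted depth maps from cameras A and B.
    K_a, K_b: (3, 3) camera intrinsics.
    T_ab: (4, 4) rigid transform taking points from A's frame to B's frame.
    Returns the mean absolute depth discrepancy over pixels of A that
    reproject inside B's image (i.e., the overlap region).
    """
    H, W = depth_a.shape
    v, u = np.mgrid[0:H, 0:W]  # pixel grid of camera A
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)

    # Back-project A's pixels to 3D points in A's camera frame.
    pts_a = (np.linalg.inv(K_a) @ pix.T) * depth_a.reshape(1, -1)

    # Move the points into B's frame.
    pts_b = T_ab[:3, :3] @ pts_a + T_ab[:3, 3:4]

    # Project into B's image plane.
    proj = K_b @ pts_b
    z = proj[2]
    valid = z > 1e-6
    u_b = np.round(proj[0] / np.maximum(z, 1e-6)).astype(int)
    v_b = np.round(proj[1] / np.maximum(z, 1e-6)).astype(int)
    valid &= (u_b >= 0) & (u_b < W) & (v_b >= 0) & (v_b < H)
    if not valid.any():
        return 0.0  # the two views do not overlap

    # Penalize disagreement between B's predicted depth and the depth
    # implied by A's prediction at the same 3D point.
    return float(np.mean(np.abs(depth_b[v_b[valid], u_b[valid]] - z[valid])))
```

With identical cameras and consistent depth predictions the penalty vanishes, while geometric misalignment between views raises it, which is the signal the constraint exploits during training.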

BibTeX

        
@misc{chang2024unifieddomaingeneralizationadaptation,
    title={Unified Domain Generalization and Adaptation for Multi-View 3D Object Detection},
    author={Gyusam Chang and Jiwon Lee and Donghyun Kim and Jinkyu Kim and Dongwook Lee and Daehyun Ji and Sujin Jang and Sangpil Kim},
    year={2024},
    eprint={2410.22461},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2410.22461},
}