
Bridging the View Disparity Between Radar and Camera Features for Multi-Modal Fusion 3D Object Detection


Abstract:

Environmental perception with multi-modal fusion is crucial in autonomous driving to increase accuracy, completeness, and robustness. This paper focuses on millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection. A novel method is proposed that realizes feature-level fusion under the bird's-eye view (BEV) for a better feature representation. Firstly, radar points are augmented with temporal accumulation and sent to a spatial-temporal encoder for radar feature extraction. Meanwhile, multi-scale 2D image features that adapt to various spatial scales are obtained by an image backbone and neck. Then, the image features are transformed to the BEV with the designed view transformer. In addition, this work fuses the multi-modal features with a two-stage fusion model consisting of point-fusion and ROI-fusion stages. Finally, a detection head regresses object categories and 3D locations. Experimental results demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on the most crucial detection metrics, mean average precision (mAP) and nuScenes detection score (NDS), on the challenging nuScenes dataset.
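
As a rough illustration only, and not the authors' implementation, the PyTorch sketch below shows how a dense point-fusion stage over aligned radar and camera BEV grids could feed a detection head that regresses class scores and 3D boxes. All module names, channel sizes, the 7-parameter box encoding, and the BEV grid resolution are assumptions; the ROI-fusion stage and the view transformer are omitted.

```python
# Hypothetical sketch of BEV-level radar-camera point-fusion plus a
# detection head. Layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class PointFusion(nn.Module):
    """Stage 1: dense per-cell fusion of radar and camera BEV features."""
    def __init__(self, radar_ch, cam_ch, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(radar_ch + cam_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, radar_bev, cam_bev):
        # Both inputs are (B, C, H, W) on the same BEV grid.
        return self.fuse(torch.cat([radar_bev, cam_bev], dim=1))

class DetectionHead(nn.Module):
    """Regresses per-cell class scores and 3D box parameters."""
    def __init__(self, in_ch, num_classes, box_dim=7):
        super().__init__()
        self.cls = nn.Conv2d(in_ch, num_classes, kernel_size=1)
        # Assumed box encoding: x, y, z, w, l, h, yaw.
        self.box = nn.Conv2d(in_ch, box_dim, kernel_size=1)

    def forward(self, x):
        return self.cls(x), self.box(x)

# Toy forward pass over an assumed 128x128 BEV grid.
radar_bev = torch.randn(2, 64, 128, 128)   # from the radar spatial-temporal encoder
cam_bev   = torch.randn(2, 128, 128, 128)  # from the image view transformer
fused = PointFusion(64, 128, 256)(radar_bev, cam_bev)
cls_map, box_map = DetectionHead(256, num_classes=10)(fused)
print(cls_map.shape, box_map.shape)  # (2, 10, 128, 128), (2, 7, 128, 128)
```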
Published in: IEEE Transactions on Intelligent Vehicles (Volume: 8, Issue: 2, February 2023)
Page(s): 1523 - 1535
Date of Publication: 27 January 2023
