Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation


Abstract:

Accurate moving object segmentation is an essential task for autonomous driving. It can provide effective information for many downstream tasks, such as collision avoidance, path planning, and static map construction. How to effectively exploit the spatial-temporal information is a critical question for 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network exploiting both spatial-temporal information and different representation modalities of LiDAR scans to improve LiDAR-MOS performance. Specifically, we first use a range image-based dual-branch structure to separately deal with spatial and temporal information that can be obtained from sequential LiDAR scans, and later combine them using motion-guided attention modules. We also use a point refinement module via 3D sparse convolution to fuse the information from both LiDAR range image and point cloud representations and reduce the artifacts on the borders of the objects. We verify the effectiveness of our proposed approach on the LiDAR-MOS benchmark of SemanticKITTI. Our method outperforms the state-of-the-art methods significantly in terms of LiDAR-MOS IoU. Benefiting from the devised coarse-to-fine architecture, our method operates online at sensor frame rate. Code is available at: https://github.com/haomo-ai/MotionSeg3D.
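The abstract's core fusion idea, motion-guided attention, can be sketched as a module where features from the temporal (motion) branch produce a per-pixel gate that re-weights the spatial branch's range-image features, with a residual path so ungated information is preserved. The class and parameter names below are illustrative assumptions, not the authors' actual code; see the linked MotionSeg3D repository for the real implementation.

```python
import torch
import torch.nn as nn


class MotionGuidedAttention(nn.Module):
    """Hypothetical sketch: gate spatial features with motion-derived attention."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution maps motion features to a per-pixel attention map in (0, 1)
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, spatial_feat: torch.Tensor, motion_feat: torch.Tensor) -> torch.Tensor:
        # Re-weight spatial features by motion attention; residual connection
        # keeps the original spatial signal so static structure is not suppressed.
        a = self.attn(motion_feat)
        return spatial_feat * a + spatial_feat


if __name__ == "__main__":
    m = MotionGuidedAttention(channels=8)
    spatial = torch.randn(2, 8, 64, 512)  # range-image spatial features (B, C, H, W)
    motion = torch.randn(2, 8, 64, 512)   # temporal-branch (motion) features
    out = m(spatial, motion)
    print(out.shape)  # torch.Size([2, 8, 64, 512])
```

In this sketch the output keeps the input resolution, so several such modules could be stacked at different decoder scales, consistent with the coarse-to-fine design the abstract describes.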
Date of Conference: 23-27 October 2022
Date Added to IEEE Xplore: 26 December 2022
Conference Location: Kyoto, Japan
