SalsaNext+: A Multimodal-Based Point Cloud Semantic Segmentation With Range and RGB Images
Multimodal Semantic Segmentation Architecture

Abstract:

Advances in sensor fusion techniques are redefining the landscape of 3D point cloud semantic segmentation, particularly for autonomous driving applications. We propose an enhanced approach that leverages the complementary strengths of LiDAR and multi-camera systems. This study introduces two extensions to the state-of-the-art SalsaNext model, which is based solely on LiDAR: SalsaNext+RGB, which integrates RGB data into range-view (RV) images, and SalsaNext+PANO, which incorporates panoramic images built from multi-camera setups. The proposed methods are evaluated on the SemanticKITTI and Panoptic nuScenes datasets, showing notable improvements in segmentation accuracy. Results indicate that RGB fusion boosts performance with minimal latency, while panoramic integration offers additional gains at the expense of a higher computational load. Comparative analyses highlight significant mIoU gains, demonstrating the potential of multimodal sensor fusion for intricate driving scene understanding.
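To make the RV fusion idea concrete, the sketch below shows the standard spherical (range-view) projection used by SalsaNext-style models, extended with per-point RGB channels. This is an illustrative assumption of how RGB data could be attached to the range image, not the paper's implementation: the camera-to-point color association is assumed to be done beforehand, and the field-of-view defaults follow common SemanticKITTI settings.

```python
import numpy as np

def lidar_to_range_view(points, rgb, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project LiDAR points into a range-view image with appended RGB.

    points: (N, 4) array of x, y, z, remission.
    rgb:    (N, 3) array of per-point colors (assumed pre-associated with
            each point, e.g. by projecting points into the camera frame).
    Returns an (H, W, 8) image: range, x, y, z, remission, r, g, b.
    """
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up - fov_down

    x, y, z, rem = points.T
    r = np.linalg.norm(points[:, :3], axis=1)

    # Spherical angles mapped to normalized image coordinates.
    yaw = -np.arctan2(y, x)                   # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    u = 0.5 * (yaw / np.pi + 1.0) * W         # column index
    v = (1.0 - (pitch - fov_down) / fov) * H  # row index

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # Write farther points first so nearer points overwrite them.
    order = np.argsort(r)[::-1]
    img = np.zeros((H, W, 8), dtype=np.float32)
    img[v[order], u[order], :5] = np.stack([r, x, y, z, rem], axis=1)[order]
    img[v[order], u[order], 5:] = rgb[order]
    return img
```

The resulting 8-channel tensor can be fed to a 2D segmentation backbone in place of the usual 5-channel range image, which is the essence of the RGB-into-RV fusion described in the abstract.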
Published in: IEEE Access ( Volume: 13)
Page(s): 64133 - 64147
Date of Publication: 10 April 2025
Electronic ISSN: 2169-3536
