Multimodal Semantic Segmentation Architecture
Abstract:
Advances in sensor fusion techniques are redefining the landscape of 3D point cloud semantic segmentation, particularly for autonomous driving applications. We propose an enhanced approach that leverages the complementary strengths of LiDAR and multi-camera systems. This study introduces two extensions to the state-of-the-art, LiDAR-only SalsaNext model: SalsaNext+RGB, which integrates RGB data into range-view (RV) images, and SalsaNext+PANO, which incorporates panoramic images built from multi-camera setups. The proposed methods are evaluated on the SemanticKITTI and Panoptic nuScenes datasets, showing notable improvements in segmentation accuracy. Results indicate that RGB fusion boosts performance with minimal added latency, while panoramic integration offers further gains at the cost of higher computational load. Comparative analyses highlight significant mIoU gains, demonstrating the potential of multimodal sensor fusion for understanding complex driving scenes.
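To make the RGB-fusion idea concrete, the sketch below shows one plausible way to build an RGB-augmented range-view image: project LiDAR points into the camera frame to sample per-point colors, then rasterize the points into an RV grid whose channels combine the usual LiDAR features with the sampled RGB. This is not the authors' code; the function name, the 64 x 2048 resolution (commonly used for SemanticKITTI), the channel layout, and the calibration inputs are all assumptions for illustration.

```python
# Hypothetical sketch of fusing RGB into a range-view (RV) image, in the
# spirit of SalsaNext+RGB. Assumes a known LiDAR-to-camera extrinsic matrix
# T_cam_lidar (4x4) and camera intrinsics K (3x3); all names are illustrative.
import numpy as np

def fuse_rgb_into_range_view(points, image, T_cam_lidar, K,
                             H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """points: (N, 4) array of x, y, z, remission; image: (h, w, 3) RGB."""
    xyz, remission = points[:, :3], points[:, 3]

    # Sample RGB: transform points into the camera frame and project with K.
    cam = (T_cam_lidar @ np.c_[xyz, np.ones(len(xyz))].T).T[:, :3]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    h, w = image.shape[:2]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    visible = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = np.zeros((len(xyz), 3), dtype=np.float32)
    rgb[visible] = image[v[visible], u[visible]] / 255.0  # unseen points stay black

    # Spherical projection to the range view, as in SalsaNext / RangeNet++.
    depth = np.linalg.norm(xyz, axis=1)
    yaw = -np.arctan2(xyz[:, 1], xyz[:, 0])
    pitch = np.arcsin(xyz[:, 2] / np.clip(depth, 1e-6, None))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    px = np.clip(((0.5 * (yaw / np.pi + 1.0)) * W).astype(int), 0, W - 1)
    py = np.clip(((1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * H)
                 .astype(int), 0, H - 1)

    # 8-channel RV image: range, x, y, z, remission, R, G, B (assumed layout).
    rv = np.zeros((H, W, 8), dtype=np.float32)
    order = np.argsort(depth)[::-1]  # write far-to-near so nearest point wins
    feats = np.c_[depth, xyz, remission, rgb]
    rv[py[order], px[order]] = feats[order]
    return rv
```

The appeal of this fusion style, as the abstract suggests, is that the extra RGB channels ride along in the same RV tensor the LiDAR-only network already consumes, so the added inference latency stays small; the panoramic variant would instead stitch multiple camera views before sampling, trading computation for wider color coverage.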
Published in: IEEE Access (Volume: 13)