Abstract:
Currently, multisensor fusion for point cloud semantic segmentation plays a pivotal role in robotics and autonomous driving. Lidar and the camera are two commonly used sensors, each offering a different data modality. However, fusion algorithms leveraging these modalities face significant challenges in achieving effective integration, and existing methods have yielded unsatisfactory results in practical applications. To address these issues, this article proposes an object-based semantic fusion algorithm for Lidar and camera data via inverse projection, which effectively integrates the information from both sensors and performs accurate semantic segmentation. We first propose a calibration method for the Lidar and the camera based on arc features in the environment, which derives the projection matrix between the sensors and enhances the adaptability of the calibration process to environmental features. A multidimensional semantic segmentation algorithm based on inverse projection is then designed, suitable for both 2-D and 3-D laser point clouds. The segmentation region is obtained by inverse projection of the bounding box, effectively reducing the influence of background points on the segmentation results and improving fusion efficiency. Additionally, distance-adaptive clustering is employed to mitigate the sensitivity of the sensor system to distance and point cloud sparsity. Building on these components, we propose the object-based semantic fusion algorithm via inverse projection, which exploits perceptual information from both Lidar and camera data. This approach achieves higher accuracy than existing Lidar-camera fusion semantic segmentation algorithms. Extensive experiments on the SemanticKITTI dataset demonstrate the superiority of our approach, with a mean intersection over union (mIoU) outperforming the state-of-the-art method by 1.4%. Field experiments further validate the effectiveness of the proposed algorithm.
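The region-selection step the abstract describes (using a 2-D image bounding box to isolate the corresponding Lidar points) can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the intrinsic matrix `K`, and the extrinsic transform `T_cam_lidar` are all illustrative assumptions; the sketch simply transforms Lidar points into the camera frame, projects them, and keeps the points whose projections fall inside the box.

```python
import numpy as np

def points_in_bbox(points_lidar, K, T_cam_lidar, bbox):
    """Illustrative sketch: select Lidar points whose camera-plane
    projection falls inside a 2-D bounding box (xmin, ymin, xmax, ymax).

    points_lidar : (N, 3) array of Lidar points
    K            : (3, 3) camera intrinsic matrix (assumed known)
    T_cam_lidar  : (4, 4) Lidar-to-camera extrinsic transform (assumed known)
    """
    # Homogeneous Lidar points -> camera frame
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    in_front = pts_cam[:, 2] > 0           # keep only points in front of the camera

    # Pinhole projection with perspective divide
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    xmin, ymin, xmax, ymax = bbox
    inside = (uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) & \
             (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax)
    return points_lidar[in_front & inside]
```

In practice this box-constrained selection is what limits segmentation to object candidates, so background points outside the box never enter the clustering stage, which is one plausible reading of the efficiency gain the abstract claims.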
Published in: IEEE Transactions on Instrumentation and Measurement (Volume: 74)