
YOLOv8-Pointcloud-SLAM3: Visual Dense Point Cloud SLAM for Robot Navigation in Dynamic Environments


Abstract:

Simultaneous Localization and Mapping (SLAM) has received widespread attention in fields such as intelligent robotics and autonomous driving. However, many current SLAM systems fail to achieve high positioning accuracy when dealing with moving objects in dynamic environments. Furthermore, many SLAM systems still rely on sparse point clouds, which hinder robots from fully comprehending their surroundings and completing advanced tasks. To address these challenges, this paper proposes YOLOv8-Pointcloud-SLAM3, a visual dense point cloud SLAM approach for robot navigation in dynamic environments. Building upon the ORB-SLAM3 system, the YOLOv8s deep learning network, which currently offers high recognition accuracy, is introduced and combined with a geometric motion consistency check. A semantic segmentation thread is added to remove dynamic objects, and a 3D dense point cloud thread is also employed, which uses dilated masks to eliminate the "ghosting shadow" effect caused by the edges of dynamic object masks. Extensive tests on the TUM dataset demonstrate that the proposed YOLOv8-Pointcloud-SLAM3 outperforms current mainstream SLAM systems in both trajectory error and position estimation accuracy.
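The abstract's mask-dilation step can be illustrated with a minimal sketch. The idea is that a segmentation mask rarely covers a moving object exactly to its boundary, so expanding the mask by a few pixels before cutting the object out of the depth map suppresses the "ghosting shadow" its edges would otherwise leave in the dense point cloud. The function below is a hypothetical illustration (not the paper's implementation), using a plain NumPy 4-neighborhood binary dilation; in practice a library routine such as OpenCV's `cv2.dilate` would typically be used.

```python
import numpy as np

def dilate_mask(mask: np.ndarray, radius: int = 2) -> np.ndarray:
    """Binary-dilate a dynamic-object mask by `radius` pixels.

    Growing the mask slightly past the segmentation boundary is what
    suppresses the "ghosting shadow" left by imperfect mask edges when
    dynamic objects are removed before dense point cloud construction.
    """
    out = mask.astype(bool).copy()
    for _ in range(radius):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow rightward
        grown[:, :-1] |= out[:, 1:]   # grow leftward
        out = grown
    return out

# Example: a 2x2 "person" detection in a 6x6 frame.
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True
static_pixels = ~dilate_mask(mask, radius=1)  # pixels safe to back-project
```

Only the pixels flagged in `static_pixels` would then be back-projected from the depth image into the 3D dense point cloud, so residual edge pixels of the moving object never enter the map.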
Date of Conference: 08-10 November 2024
Date Added to IEEE Xplore: 27 December 2024
Conference Location: Wuhan, China
