Abstract:
To mitigate the heavy reliance on semantic information and the unreliability of manual feature extraction in dynamic simultaneous localization and mapping (SLAM) and object tracking systems, a novel visual-inertial SLAM system with deep feature extraction and matching is proposed. A lightweight network is designed for feature extraction and description, replacing the oriented FAST and rotated BRIEF (ORB) approach; it mitigates deep-learning inference latency while overcoming the shortcomings of manual feature extraction. A fast object tracking method is developed for extracting and associating dynamic objects based on clustering analysis of the distances between matched feature points and their epipolar lines, which enables real-time tracking of numerous objects without semantic data. An advanced online incremental loop closure detector for deep features is designed, which supersedes ORB-based detectors and maintains global pose optimization. The system's effectiveness and advantages, including its proficiency in real-time embedded platform execution and enhanced self-localization and dynamic object tracking, have been demonstrated through extensive evaluations. Notably, the system consistently tracks dynamic objects with rapid movement or weak texture, furnishing the localization system with robust dynamic constraints.
Published in: IEEE Transactions on Instrumentation and Measurement (Volume: 74)
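The abstract's core geometric idea is that, under the camera's ego-motion, static feature matches lie close to their epipolar lines while points on moving objects do not, so clustering the point-to-epipolar-line distances separates dynamic matches and lets them be grouped into objects without any semantic labels. The sketch below illustrates that idea only; the function names, the distance threshold, and the use of DBSCAN for the spatial grouping are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' implementation): flag dynamic feature matches
# by their distance to the epipolar lines induced by the ego-motion, then group
# the dynamic points spatially into per-object candidates.
import numpy as np
from sklearn.cluster import DBSCAN  # assumed available; any density clustering would do


def epipolar_distances(pts1, pts2, F):
    """Distance of each second-image point to the epipolar line F @ x1 (pts are Nx2 pixels)."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates in image 1
    x2 = np.hstack([pts2, ones])          # homogeneous coordinates in image 2
    lines2 = (F @ x1.T).T                 # epipolar lines l' = F x1 in image 2
    num = np.abs(np.sum(lines2 * x2, axis=1))
    den = np.sqrt(lines2[:, 0] ** 2 + lines2[:, 1] ** 2)
    return num / den


def split_and_group(pts1, pts2, F, dist_thresh=3.0, eps_px=40.0, min_pts=5):
    """Label matches as static/dynamic, then cluster dynamic points into object candidates."""
    d = epipolar_distances(pts1, pts2, F)
    dynamic_mask = d > dist_thresh        # static points should lie near their epipolar lines
    dyn_pts = pts2[dynamic_mask]
    if len(dyn_pts) == 0:
        return dynamic_mask, np.empty(0, dtype=int)
    # Spatial clustering of the dynamic points yields per-object groups that a
    # tracker can associate over time without semantic segmentation.
    labels = DBSCAN(eps=eps_px, min_samples=min_pts).fit_predict(dyn_pts)
    return dynamic_mask, labels
```

In a full pipeline, the fundamental matrix F would come from the static background matches (or from the visual-inertial pose estimate), and the resulting per-object point groups would supply the dynamic constraints mentioned in the abstract.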