
Improving Feature-based Visual SLAM by Semantics


Abstract:

Feature-based simultaneous localization and mapping (SLAM) algorithms augmented with semantic information can achieve better feature-matching and tracking accuracy than the original SLAM algorithms. This paper therefore shows how to improve feature-based SLAM by matching only features that belong to objects of the same semantic class. The basic idea is to use a deep neural network, YOLO (you only look once [1]), to detect and classify objects and to associate features with the objects in whose bounding boxes they appear, thereby giving features the semantic label of those objects. During feature matching in the SLAM algorithm, only features with the same semantic label are matched (e.g. books with books, bottles with bottles), eliminating matches between similar features on different classes of objects. Experiments combining classical ORB-SLAM2 with YOLO have been performed on an embedded PC. Additionally, ORB-SLAM2 with different versions of YOLO has been tested on a powerful desktop GPU as well as on an Nvidia Jetson TX2 board. The experimental results show that using the semantic information provided by object recognition reduces wrong feature matches during tracking and reduces the number of cases in which tracking is lost.
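The label-then-match idea from the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): keypoints are assigned the class of the YOLO bounding box containing them, and a brute-force Hamming-distance matcher (as would be used for binary ORB descriptors) skips any candidate pair whose semantic labels differ. All function and variable names here are assumptions for illustration.

```python
def label_features(keypoints, detections):
    """Assign each keypoint (x, y) the class of the first detection
    whose bounding box contains it; None if it lies outside all boxes.
    detections: list of (class_name, x_min, y_min, x_max, y_max)."""
    labels = []
    for (x, y) in keypoints:
        label = None
        for cls, x0, y0, x1, y1 in detections:
            if x0 <= x <= x1 and y0 <= y <= y1:
                label = cls
                break
        labels.append(label)
    return labels


def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")


def match_features(desc1, labels1, desc2, labels2, max_dist=40):
    """Brute-force nearest-neighbour matching, but only between
    features carrying the same semantic label (the paper's core idea)."""
    matches = []
    for i, (d1, l1) in enumerate(zip(desc1, labels1)):
        best_j, best_d = None, max_dist + 1
        for j, (d2, l2) in enumerate(zip(desc2, labels2)):
            if l1 != l2:  # semantic filter: never match across classes
                continue
            d = hamming(d1, d2)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j, best_d))
    return matches
```

In a real pipeline the descriptors would be 256-bit ORB descriptors and the matcher would use ratio tests and grid-based search, but the semantic filter would sit in the same place: as an early rejection inside the candidate loop.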
Date of Conference: 12-14 December 2018
Date Added to IEEE Xplore: 09 May 2019
Conference Location: Sophia Antipolis, France

