
Stabilize an Unsupervised Feature Learning for LiDAR-based Place Recognition


Abstract:


Place recognition is one of the major challenges for effective LiDAR-based localization and mapping. Traditional methods usually rely on geometry matching to achieve place recognition, which requires a global geometry map to be stored. In this paper, we accomplish the place recognition task with an end-to-end feature learning framework that takes LiDAR inputs. The method consists of two core modules: a dynamic octree mapping module that generates local 2D maps while accounting for the robot's motion, and an unsupervised place feature learning module, an improved adversarial feature learning network with additional assistance for long-term place recognition. More specifically, in place feature learning we introduce an additional Generative Adversarial Network with a designed Conditional Entropy Reduction module to stabilize the feature learning process in an unsupervised manner. We evaluate the proposed method on the KITTI dataset and the North Campus Long-Term (NCLT) LiDAR dataset. Experimental results show that the proposed method outperforms the state of the art in place recognition under long-term applications. Moreover, the feature size and inference efficiency of the proposed method make real-time performance attainable on practical robotic platforms.
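To give a flavor of the Conditional Entropy Reduction idea mentioned above, the following is a minimal, hypothetical sketch (not the paper's actual implementation): it treats an encoder's raw feature vectors as logits over soft "place" assignments and measures their mean conditional entropy, a quantity that an unsupervised training loop could minimize as a regularizer to sharpen and stabilize the learned features. The function name and the softmax-over-dimensions interpretation are assumptions for illustration only.

```python
import numpy as np

def conditional_entropy(features):
    """Mean entropy of soft assignments derived from raw feature vectors.

    features: (batch, dim) array of encoder outputs (hypothetical setup).
    Returns a scalar; lower values mean sharper, more confident features.
    """
    # Numerically stable softmax over the feature dimension.
    e = np.exp(features - features.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    # Per-sample entropy, averaged over the batch; the small epsilon
    # guards against log(0) for near-zero probabilities.
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

As a sanity check, flat (uninformative) features yield the maximum entropy log(dim), while strongly peaked features yield entropy near zero, so using this term as a loss penalizes ambiguous place features.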
Date of Conference: 01-05 October 2018
Date Added to IEEE Xplore: 06 January 2019
Conference Location: Madrid, Spain

I. Introduction

In the last decade, the robotics community has seen many breakthroughs and developments. One of the domains that has improved substantially is simultaneous localization and mapping (SLAM), which has made real-world applications such as autonomous driving possible. Effective localization and mapping rely heavily on robust place recognition (PR) or loop closure detection (LCD), especially for long-term navigation tasks [1], [2]; this remains one of the major challenges for current SLAM systems. Visual place recognition uses a camera to match two scenes. The problem is challenging because the same scene appears differently under different seasons or weather conditions. Moreover, the same place appears different from different viewpoints, which occurs frequently during SLAM because there is no guarantee that a robot will always observe each local scene from the same viewpoint. These are all challenges faced by a robot performing SLAM.


References

References are not available for this document.