
COSEM: Collaborative Semantic Map Matching Framework for Autonomous Robots



Abstract:

Relative localization is a fundamental requirement for the coordination of multiple robots. To date, existing research on relative localization has mainly depended on the extraction of low-level geometric features such as planes, lines, and points, which may fail in challenging cases where the initial error is large and the overlapping area is small. In this article, a novel approach named collaborative semantic map matching (COSEM) is proposed to estimate the relative transformation between robots. COSEM jointly performs multimodal information fusion, semantic data association, and optimization in a unified framework. First, each robot applies a multimodal information fusion model to generate a local semantic map. Since the correspondences between local maps are latent variables, a flexible semantic data association strategy based on expectation-maximization is proposed. Instead of assigning hard geometric data associations, semantic and geometric associations are jointly estimated. Minimizing the expected cost then yields the rigid transformation between the two semantic maps. Evaluations on the SemanticKITTI benchmark and real-world experiments demonstrate improved accuracy, convergence, and robustness.
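
The abstract describes an expectation-maximization procedure that treats inter-map correspondences as latent variables and weighs semantic and geometric evidence jointly before recovering a rigid transformation. The sketch below illustrates one way such a scheme can be structured, assuming the local semantic maps are labeled 3-D point sets; the function name, the label-compatibility weighting, and all parameter values are illustrative assumptions and not the authors' implementation.

    # Illustrative EM-style soft association between two labeled point sets,
    # in the spirit of joint semantic/geometric data association. This is a
    # reconstruction under stated assumptions, not the COSEM code.
    import numpy as np

    def semantic_em_align(src_pts, src_lbl, dst_pts, dst_lbl,
                          n_iters=30, sigma2=1.0, label_weight=0.1):
        """Estimate a rigid transform (R, t) roughly aligning src to dst.

        src_pts, dst_pts : (N, 3), (M, 3) arrays of map points.
        src_lbl, dst_lbl : (N,), (M,) integer semantic labels.
        """
        R = np.eye(3)
        t = np.zeros(3)
        # Semantic compatibility: full weight for matching labels,
        # a small (assumed) weight otherwise.
        same = (src_lbl[:, None] == dst_lbl[None, :]).astype(float)
        compat = same + label_weight * (1.0 - same)

        for _ in range(n_iters):
            # E-step: soft correspondence probabilities combining a Gaussian
            # geometric kernel with the semantic compatibility term.
            transformed = src_pts @ R.T + t
            d2 = np.sum((transformed[:, None, :] - dst_pts[None, :, :]) ** 2, axis=2)
            w = compat * np.exp(-0.5 * d2 / sigma2)
            w /= w.sum(axis=1, keepdims=True) + 1e-12

            # M-step: weighted rigid alignment (Kabsch/SVD) minimizing the
            # expected cost under the current correspondence distribution.
            virtual_dst = w @ dst_pts              # expected match per source point
            mu_s = src_pts.mean(axis=0)
            mu_d = virtual_dst.mean(axis=0)
            H = (src_pts - mu_s).T @ (virtual_dst - mu_d)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mu_d - R @ mu_s
        return R, t

In this sketch the semantic term simply down-weights geometrically close but semantically incompatible pairs, which captures, at a high level, why soft joint association can tolerate larger initial errors and lower overlap than hard nearest-neighbor matching.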
Published in: IEEE Transactions on Industrial Electronics ( Volume: 69, Issue: 4, April 2022)
Page(s): 3843 - 3853
Date of Publication: 14 April 2021

