MR-COGraphs: Communication-Efficient Multi-Robot Open-Vocabulary Mapping System via 3D Scene Graphs


Abstract:

Collaborative perception in unknown environments is crucial for multi-robot systems. With the emergence of foundation models, robots can now not only perceive geometric information but also achieve open-vocabulary scene understanding. However, existing map representations that support open-vocabulary queries often involve large data volumes, which becomes a bottleneck for multi-robot transmission in communication-limited environments. To address this challenge, we develop a method to construct a graph-structured 3D representation called COGraph, where nodes represent objects with semantic features and edges capture their spatial adjacency relationships. Before transmission, a data-driven feature encoder is applied to compress the feature dimensions of the COGraph. Upon receiving COGraphs from other robots, the semantic features of each node are recovered using a decoder. We also propose a feature-based approach for place recognition and translation estimation, enabling the merging of local COGraphs into a unified global map. We validate our framework on two realistic datasets and in a real-world environment. The results demonstrate that, compared to existing baselines for open-vocabulary map construction, our framework reduces the data volume by over 80% while maintaining mapping and query performance without compromise.
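
The sketch below illustrates, under assumptions, the kind of structure the abstract describes: a graph whose nodes hold object-level semantic features and whose edges encode spatial adjacency, plus a data-driven encoder/decoder that shrinks node features before transmission and reconstructs them on the receiving robot. The class names, feature dimensions (512-d input, 64-d code), and the small autoencoder architecture are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a COGraph-style structure and feature compression,
# based only on the abstract; names, dimensions, and the autoencoder design
# are assumptions for illustration.
from dataclasses import dataclass, field
import numpy as np
import torch
import torch.nn as nn


@dataclass
class ObjectNode:
    node_id: int
    centroid: np.ndarray   # 3D object position (assumed representation)
    feature: np.ndarray    # open-vocabulary semantic feature, e.g. a 512-d embedding


@dataclass
class COGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> ObjectNode
    edges: set = field(default_factory=set)     # unordered (id_a, id_b) adjacency pairs

    def add_node(self, node: ObjectNode):
        self.nodes[node.node_id] = node

    def add_edge(self, id_a: int, id_b: int):
        self.edges.add((min(id_a, id_b), max(id_a, id_b)))


class FeatureAutoencoder(nn.Module):
    """Toy data-driven encoder/decoder: the encoder compresses node features
    before transmission, the decoder recovers them on the receiving robot."""
    def __init__(self, in_dim: int = 512, code_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Example: compress all node features of a local COGraph for transmission.
graph = COGraph()
graph.add_node(ObjectNode(0, np.zeros(3), np.random.randn(512).astype(np.float32)))
graph.add_node(ObjectNode(1, np.ones(3), np.random.randn(512).astype(np.float32)))
graph.add_edge(0, 1)

ae = FeatureAutoencoder()
with torch.no_grad():
    feats = torch.stack([torch.from_numpy(n.feature) for n in graph.nodes.values()])
    codes = ae.encoder(feats)        # low-dimensional payload sent to other robots
    recovered = ae.decoder(codes)    # receiver-side reconstruction of semantic features
```

In this sketch the bandwidth saving comes from transmitting only node centroids, adjacency pairs, and the compressed codes rather than dense per-point features; the actual compression ratio reported in the paper (over 80%) refers to the authors' full system, not this toy example.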
Published in: IEEE Robotics and Automation Letters (Volume: 10, Issue: 6, June 2025)
Page(s): 5713 - 5720
Date of Publication: 16 April 2025

Author Affiliations:

Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Pengcheng Laboratory, Shenzhen, China
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Department of Electronic Engineering, Tsinghua University, Beijing, China
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Department of Computer Science, University of Science and Technology of China, Hefei, China
Openmind, Wuhu, China
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Department of Electronic Engineering, Tsinghua University, Beijing, China
Pengcheng Laboratory, Shenzhen, Guangdong, China
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Department of Electronic Engineering, Tsinghua University, Beijing, China