Abstract:
Coordinated navigation of an arbitrary number of robots to an arbitrary number of goals is a major challenge in robotics, often hindered by the scalability limitations of existing strategies. This letter introduces a decentralized multi-agent control system using neural network policies trained in simulation. By leveraging permutation-invariant neural network architectures and model-free reinforcement learning, our policy enables robots to prioritize varying numbers of collaborating robots and goals in a zero-shot manner, without being biased by ordering or limited to a fixed capacity. We validate the task performance and scalability of our policies through experiments in both simulation and real-world settings. Our approach achieves a 10.3% higher success rate in collaborative navigation tasks compared to a policy without a permutation-invariant encoder. Additionally, it finds near-optimal solutions for multi-robot navigation problems while being two orders of magnitude faster than an optimization-based centralized controller. We deploy our multi-goal navigation policies on two wheeled-legged quadrupedal robots, which successfully complete a series of multi-goal navigation missions.
Published in: IEEE Robotics and Automation Letters ( Volume: 10, Issue: 2, February 2025)
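The abstract's key architectural idea, an encoder whose output does not depend on the ordering or count of observed robots and goals, can be illustrated with a minimal Deep Sets style sketch. This is not the authors' code; the weight shapes, pooling choice (mean), and feature sizes are arbitrary assumptions made purely to demonstrate the permutation-invariance property.

```python
import numpy as np

# Hypothetical sketch of a permutation-invariant set encoder:
# each element (e.g. one nearby robot or goal) is embedded
# independently, then a symmetric pooling (mean) collapses the set,
# so the result is unchanged by reordering and accepts any set size.

rng = np.random.default_rng(0)
W_embed = rng.normal(size=(4, 8))  # per-element embedding: obs_dim 4 -> 8
W_head = rng.normal(size=(8, 3))   # post-pooling head: 8 -> 3 features

def encode(observations: np.ndarray) -> np.ndarray:
    """observations: (n_elements, 4); n_elements may vary per call."""
    h = np.tanh(observations @ W_embed)  # embed each element independently
    pooled = h.mean(axis=0)              # symmetric pooling: order-invariant
    return np.tanh(pooled @ W_head)

obs = rng.normal(size=(5, 4))       # e.g. observations of 5 nearby robots
shuffled = obs[rng.permutation(5)]  # same set, different ordering
assert np.allclose(encode(obs), encode(shuffled))  # invariant to ordering
assert encode(obs[:3]).shape == encode(obs).shape  # handles varying counts
```

Because pooling is symmetric, the policy downstream of such an encoder is neither biased by input ordering nor tied to a fixed number of inputs, which is the property the letter exploits for zero-shot scaling.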