Abstract:
In the field of the Internet of Unmanned Agents (IUA), autonomous devices often struggle to maintain stability, adaptability, and coordination in dynamic environments. Current control strategies are hindered by difficulties such as real-time adaptation to environmental changes, handling sensor noise, and managing the coordination of multiple agents under complex, variable task constraints. These challenges limit the practical deployment of robots in IUA systems, making it crucial to develop more robust and efficient control methods. To address these issues, we propose a reinforcement learning-based impedance coefficient optimization approach that improves robots' stability and responsiveness. By integrating impedance control, the Divergent Component of Motion (DCM) model, reinforcement learning (RL), and quadratic programming (QP), this approach dynamically adjusts the robot's stiffness, damping, and inertia coefficients in real time. Specifically, the Soft Actor-Critic (SAC) algorithm is used for optimization, while QP ensures that control forces remain feasible across multiple tasks and constraints. This strategy enhances the robots' adaptability and coordination in complex IUA systems. Experimental results validate the approach, showing significant improvements in robot performance in IUA systems.
Published in: IEEE Internet of Things Journal (Early Access)
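The core idea in the abstract — an impedance law whose stiffness, damping, and inertia coefficients are treated as tunable parameters (here set by the SAC policy), with a QP keeping commanded forces feasible — can be sketched in miniature. The snippet below is a hypothetical 1-D illustration, not the paper's implementation: `impedance_step` integrates the standard impedance model, and `qp_project_1d` stands in for the QP feasibility step, which in one dimension with box constraints reduces to a clip.

```python
def impedance_step(x, x_dot, x_ref, f_ext, K, D, M, dt=0.01):
    """One Euler step of the impedance model
        M * x_dd + D * x_dot + K * (x - x_ref) = f_ext,
    where K (stiffness), D (damping), and M (inertia) are the
    coefficients the RL policy would adjust at each step."""
    x_dd = (f_ext - D * x_dot - K * (x - x_ref)) / M
    return x + x_dot * dt, x_dot + x_dd * dt


def qp_project_1d(f_desired, f_min, f_max):
    """1-D stand-in for the QP feasibility step:
        argmin_f (f - f_desired)^2  s.t.  f_min <= f <= f_max,
    whose closed-form solution is a clip to the feasible interval.
    The full method solves a multi-task, multi-constraint QP instead."""
    return max(f_min, min(f_max, f_desired))


# Hypothetical control loop: the SAC policy would output (K, D, M);
# here they are fixed to show the state converging to the reference.
x, x_dot, x_ref = 0.2, 0.0, 0.0
K, D, M = 120.0, 15.0, 2.0
for _ in range(200):
    x, x_dot = impedance_step(x, x_dot, x_ref, f_ext=0.0, K=K, D=D, M=M)
```

In the full approach, the clip above is replaced by a genuine QP over all task-space forces and constraints, and (K, D, M) are re-sampled from the SAC policy at every control step rather than held constant.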