
Dynamic Event-Triggered Reinforcement Learning-Based Consensus Tracking of Nonlinear Multi-Agent Systems



Abstract:

In this paper, we present a novel approach to the event-triggered optimized consensus tracking control problem for a class of uncertain nonlinear multi-agent systems (MASs). To optimize control performance, we employ an adaptive reinforcement learning (RL) algorithm based on the actor-critic architecture within the backstepping design framework. The proposed RL-based optimized controller uses a novel event-triggered strategy that dynamically adjusts the sampling-error threshold online, so that state signals are transmitted only intermittently, reducing communication resource usage and computational complexity. Through a Lyapunov-based stability analysis, we establish the boundedness of all signals in the closed-loop MAS and show that Zeno behavior is excluded. Numerical simulations of a practical multi-electromechanical system validate the effectiveness of the proposed scheme.
Page(s): 2120 - 2132
Date of Publication: 22 February 2023
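
The abstract does not state the paper's triggering law itself. As a rough illustration only, the following Python sketch shows a generic dynamic event-triggering rule of the kind described: a state is transmitted only when the sampling error exceeds a threshold that combines a static term with an internal dynamic variable updated online. The scalar plant, the feedback gain k, and the parameters sigma, beta, lam, and eta0 are all illustrative assumptions, not the paper's design.

```python
def simulate_dynamic_trigger(T=10.0, dt=1e-3, sigma=0.5, beta=1.0,
                             lam=2.0, eta0=1.0, k=1.5):
    """Single-agent illustration of dynamic event-triggered feedback.

    The controller only receives the state at triggering instants; the
    internal variable eta relaxes or tightens the triggering threshold
    online, which is the 'dynamic' adjustment the abstract refers to.
    """
    x, x_hat, eta = 1.0, 1.0, eta0   # state, last transmitted state, dynamic variable
    events, t = [], 0.0
    while t < T:
        u = -k * x_hat               # control uses the last transmitted state
        e = x_hat - x                # sampling (measurement) error
        # dynamic trigger: fire when the error term outweighs the
        # combined static + dynamic threshold
        if lam * e**2 >= sigma * x**2 + eta:
            x_hat, e = x, 0.0        # transmit the current state
            events.append(round(t, 3))
        # internal dynamic variable: decays, replenished by unused margin
        eta += dt * (-beta * eta + sigma * x**2 - lam * e**2)
        eta = max(eta, 0.0)
        x += dt * (x + u)            # toy unstable scalar plant dx = (x + u) dt
        t += dt
    return events

if __name__ == "__main__":
    ev = simulate_dynamic_trigger()
    print(f"{len(ev)} triggering events over 10 s; first few at {ev[:5]}")
```

Running the sketch prints how many transmissions occur over the horizon; because eta adds slack to the threshold, events are sparser than with a purely static condition, which is the communication-saving effect the abstract attributes to the dynamic event-triggered strategy.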

