
Reinforcement Learning H∞ Optimal Formation Control for Perturbed Multiagent Systems With Nonlinear Faults


Abstract:

This article presents an optimal formation control strategy for multiagent systems based on a reinforcement learning (RL) technique, considering prescribed performance and unknown nonlinear faults. To optimize the control performance, an RL strategy is introduced based on an identifier-critic-actor-disturbance structure and the backstepping framework. The identifier, critic, actor, and disturbance neural networks (NNs) are employed to estimate the unknown dynamics, assess the system performance, execute the control actions, and derive the worst-case disturbance policy, respectively. Under this scheme, the persistent excitation requirement is removed by adopting simplified NN updating laws, which are derived by applying gradient descent to designed positive functions rather than to the square of the Bellman residual. To achieve the desired error precision within the prescribed time, a constraining function and an error transformation scheme are employed. In addition, to enhance the system's robustness, a fault observer is utilized to compensate for the impact of the unknown nonlinear faults. The stability of the closed-loop system is guaranteed, and the prescribed performance is achieved. Finally, simulation examples validate the effectiveness of the proposed optimal control strategy.
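For readers unfamiliar with the prescribed performance mechanism mentioned in the abstract, the sketch below illustrates one common realization in Python: an exponentially decaying constraining function together with a logarithmic error transformation. The functions rho and transformed_error, and all numerical parameters, are illustrative assumptions, not the article's exact constructions; in particular, a prescribed-time bound would typically use a constraining function that reaches its final band in finite time, so the exponential choice here is only a simplified stand-in.

    import numpy as np

    def rho(t, rho0=2.0, rho_inf=0.05, decay=1.0):
        # Exponentially decaying constraining function (illustrative parameters):
        # rho(0) = rho0 and rho(t) -> rho_inf as t grows.
        return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

    def transformed_error(e, t):
        # Map a tracking error confined to -rho(t) < e(t) < rho(t) onto an
        # unconstrained variable via a logarithmic (arctanh-type) transformation,
        # so keeping the transformed error bounded enforces the prescribed bound.
        z = e / rho(t)  # normalized error, |z| < 1 inside the funnel
        return 0.5 * np.log((1.0 + z) / (1.0 - z))

    # Usage: an error trajectory that starts inside the funnel and stays inside it
    # yields a finite transformed error at every sampled time.
    t = np.linspace(0.0, 5.0, 6)
    e = 1.5 * np.exp(-0.8 * t)
    print(transformed_error(e, t))

Designing the controller on the transformed error rather than on the raw error is what turns the prescribed bound into an unconstrained stabilization problem, which is compatible with the backstepping and RL machinery described above.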
Page(s): 1935 - 1947
Date of Publication: 24 December 2024
