Abstract:
Although model-based $H_{\infty}$ control schemes for nonlinear continuous-time (CT) systems with unknown system dynamics have been extensively studied, model-free $H_{\infty}$ control of nonlinear CT systems via Q-learning remains a challenging problem. This paper develops a novel Q-learning-based model-free $H_{\infty}$ control scheme for nonlinear CT systems, in which the adaptive critic and actor update each other continuously and simultaneously, eliminating the need for iterative steps. As a result, a hybrid structure is avoided and an initial stabilizing control policy is no longer required. To obtain the $H_{\infty}$ control of the nonlinear CT system, a Q-learning strategy is introduced to solve the $H_{\infty}$ control problem online in a non-iterative manner, without requiring knowledge of the system dynamics. In addition, a new learning law based on a sliding-mode scheme is developed to update the critic neural network (NN) weights online. Owing to the strong convergence of the critic NN weights, the actor NN used in most $H_{\infty}$ control algorithms is removed. Finally, numerical simulations and experimental results of an adaptive cruise control (ACC) system on a real vehicle demonstrate the feasibility of the presented control method and learning algorithm.
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence (Volume: 9, Issue: 2, April 2025)
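The abstract describes an online, model-free scheme in which a critic approximates the Q-function of the zero-sum game underlying $H_{\infty}$ control and its weights are adapted with a sliding-mode-style law, with no actor NN. The paper's exact equations are not given here, so the following is only a minimal illustrative sketch under assumed forms: the basis `phi`, the utility $x^2 + u^2 - \gamma^2 w^2$, the gain `k_s`, and the scalar example dynamics are all assumptions introduced for illustration, not the authors' algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): a critic linear in a
# quadratic basis approximates Q(x,u,w) ~= W^T phi(x,u,w) for the zero-sum
# game behind H-infinity control. Weights are updated online with a
# sliding-mode-inspired (sign-based, normalized) law driven by the Bellman/HJI
# residual measured along the trajectory. All symbols here are assumptions.

def phi(x, u, w):
    # quadratic basis in (x, u, w) for a scalar example system
    return np.array([x * x, x * u, x * w, u * u, w * w])

def bellman_residual(W, x, u, w, x_next, u_next, w_next, dt, gamma=0.9):
    # discrete approximation of the residual along measured data:
    # e = utility*dt + Q(next) - Q(current), utility = x^2 + u^2 - gamma^2 w^2
    utility = x * x + u * u - gamma * gamma * w * w
    return utility * dt + W @ phi(x_next, u_next, w_next) - W @ phi(x, u, w)

def sliding_mode_update(W, e, regressor, k_s=2.0, dt=0.001):
    # sliding-mode-style critic update: sign of the residual times the
    # normalized regressor gives a bounded, robust weight adjustment
    return W - k_s * dt * np.sign(e) * regressor / (1.0 + regressor @ regressor)

# toy online rollout on assumed scalar dynamics x_dot = -x + u + w
W, x, dt = np.zeros(5), 1.0, 0.001
for step in range(20000):
    u = -0.5 * x + 0.01 * np.sin(100 * step * dt)   # exploratory control input
    w = 0.05 * np.sin(5 * step * dt)                # bounded disturbance
    x_next = x + dt * (-x + u + w)
    u_next = -0.5 * x_next
    w_next = 0.05 * np.sin(5 * (step + 1) * dt)
    e = bellman_residual(W, x, u, w, x_next, u_next, w_next, dt)
    regressor = phi(x_next, u_next, w_next) - phi(x, u, w)
    W = sliding_mode_update(W, e, regressor)
    x = x_next

print("learned critic weights:", W)
```

In this sketch the control and disturbance are fixed exploratory signals so that only the critic adaptation is shown; in the paper the converged critic itself yields the $H_{\infty}$ control and worst-case disturbance policies, which is why no separate actor NN is needed.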