Neural-network-based reinforcement learning controller for nonlinear systems with non-symmetric dead-zone inputs

Authors (4):
Xin Zhang (Sch. of Inf. Sci. & Eng., Northeastern Univ., Shenyang); Huaguang Zhang; Derong Liu; Yongsu Kim

A novel adaptive-critic-based NN controller using reinforcement learning is developed for a class of nonlinear systems with non-symmetric dead-zone inputs. The adaptive critic NN controller uses two NNs: the critic NN approximates the strategic utility function, and the action NN approximates the unknown nonlinear function and minimizes the strategic utility function. The tuning of both NNs is performed online, without an explicit offline learning phase. Uniform ultimate boundedness of the closed-loop tracking error is derived using the Lyapunov method. Finally, a numerical example is included to show the effectiveness of the theoretical results.
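The structure described in the abstract — a critic NN estimating a strategic utility signal and an action NN generating the control, both tuned online — can be sketched as follows. This is an illustrative toy, not the paper's equations: the scalar plant, the dead-zone width, the fixed random hidden layers, and all gains (`alpha_c`, `alpha_a`, `gamma`) are assumptions for demonstration only.

```python
import numpy as np

# Toy online actor-critic loop with two single-hidden-layer NNs.
# The critic estimates a long-run cost (strategic utility); the actor
# produces the control and is nudged to reduce the critic's estimate.
rng = np.random.default_rng(0)
n_hidden = 10

# Fixed random input weights; only output weights are tuned online,
# a common simplification in adaptive-critic NN control sketches.
V_c = rng.standard_normal((n_hidden, 2))   # critic input weights (fixed)
V_a = rng.standard_normal((n_hidden, 1))   # actor input weights (fixed)
w_c = np.zeros(n_hidden)                   # critic output weights (tuned)
w_a = np.zeros(n_hidden)                   # actor output weights (tuned)

phi = np.tanh                              # hidden-layer activation
alpha_c, alpha_a = 0.05, 0.02              # learning-rate gains (assumed)
gamma = 0.9                                # discount factor (assumed)

x, x_ref = 0.5, 0.0                        # plant state and reference
for k in range(200):
    e = x - x_ref                          # tracking error
    u = w_a @ phi(V_a @ np.array([x]))     # actor output = control input
    r = e ** 2                             # instantaneous utility/cost

    # Hypothetical stable scalar plant with a non-symmetric-style
    # input dead-zone standing in for the general nonlinear system.
    u_dz = np.sign(u) * max(abs(u) - 0.1, 0.0)
    x_next = 0.8 * x + u_dz

    # Critic: temporal-difference-style update of the utility estimate.
    z, z_next = np.array([x, u]), np.array([x_next, u])
    J, J_next = w_c @ phi(V_c @ z), w_c @ phi(V_c @ z_next)
    td = r + gamma * J_next - J
    w_c += alpha_c * td * phi(V_c @ z)

    # Actor: descend the critic's utility estimate (approximate step).
    w_a -= alpha_a * J * phi(V_a @ np.array([x]))
    x = x_next

print(abs(x - x_ref))  # tracking error remains bounded in this toy run
```

In the paper the boundedness of such a loop is established formally (uniform ultimate boundedness via a Lyapunov argument); here it is only observed empirically on a trivially stable toy plant.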

Published in:

2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL '09)

Date of Conference:

March 30 - April 2, 2009