An approach to tune fuzzy controllers based on reinforcement learning for autonomous vehicle control

Authors: Xiaohui Dai (Rockwell Automation Research Center, Shanghai, China); Chi-Kwong Li; A. B. Rad

In this paper, we propose a new approach for tuning the parameters of fuzzy controllers based on reinforcement learning. The architecture of the proposed approach comprises a Q estimator network (QEN) and a Takagi-Sugeno-type fuzzy inference system (TSK-FIS). Unlike other fuzzy Q-learning approaches that select an optimal action from a finite set of discrete actions, the proposed controller obtains the control output directly from the TSK-FIS. With this architecture, learning algorithms for all the parameters of the QEN and the FIS are developed based on temporal-difference (TD) methods together with the gradient-descent algorithm. The performance of the proposed design technique is illustrated by simulation studies of a vehicle longitudinal-control system.
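The core idea in the abstract, continuous control output from a TSK fuzzy system whose parameters are adjusted by a TD error rather than by selecting among discrete actions, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a scalar state, Gaussian membership functions, a zero-order TSK system, and it updates only the rule consequents; all names and hyperparameters (`TSKFIS`, `lr`, the membership centers and widths) are illustrative. In the paper, the TD error would come from the QEN rather than being supplied directly.

```python
import math


def gaussian(x, c, s):
    """Gaussian membership degree of input x for a fuzzy set
    with center c and width s."""
    return math.exp(-((x - c) / s) ** 2)


class TSKFIS:
    """Zero-order Takagi-Sugeno fuzzy inference system: normalized
    rule firing strengths weight scalar consequents, giving a
    continuous control output instead of a discrete action choice."""

    def __init__(self, centers, widths, consequents):
        self.centers = centers          # membership-function centers
        self.widths = widths            # membership-function widths
        self.w = list(consequents)      # tunable consequent parameters

    def firing(self, x):
        """Firing strength of each rule for input x."""
        return [gaussian(x, c, s) for c, s in zip(self.centers, self.widths)]

    def output(self, x):
        """Defuzzified control output: normalized weighted sum
        of the rule consequents."""
        phi = self.firing(x)
        total = sum(phi)
        return sum(f * w for f, w in zip(phi, self.w)) / total

    def td_update(self, x, td_error, lr=0.05):
        """Gradient-descent step on the consequents driven by a TD
        error; each consequent moves in proportion to its normalized
        firing strength (its gradient w.r.t. the output)."""
        phi = self.firing(x)
        total = sum(phi)
        for i, f in enumerate(phi):
            self.w[i] += lr * td_error * f / total


# Illustrative usage: a positive TD error pulls the output upward
# at the visited state.
fis = TSKFIS(centers=[-1.0, 0.0, 1.0],
             widths=[0.5, 0.5, 0.5],
             consequents=[0.0, 0.0, 0.0])
before = fis.output(0.2)
fis.td_update(0.2, td_error=1.0)
after = fis.output(0.2)
```

Because the output is differentiable in the consequent parameters, the same chain-rule step extends to the membership centers and widths, which is what allows all parameters of the FIS (and the QEN) to be trained by TD methods plus gradient descent as described above.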

Published in:

IEEE Transactions on Intelligent Transportation Systems (Volume 6, Issue 3)