Application of Reinforcement Learning to autonomous heading control for bionic underwater robots

3 Author(s)
Longxin Lin; Haibin Xie; Lincheng Shen (National University of Defense Technology, Changsha, China)

The bionic underwater robot propelled by undulating fins is an active area of current underwater-robot research. Despite rapid progress in bionic underwater robots, their control remains challenging because of strong nonlinearity, environmental uncertainty, and limited understanding of the dynamic characteristics of undulating fins. As a model-free method, Q-learning-based reinforcement learning achieves its control objective by interacting with the environment and maximizing a reward signal, which makes it well suited to complicated applications such as robot control. This paper applies an online Q-learning algorithm to autonomous heading control of a bionic underwater robot with two undulating fins. The algorithm requires no prior knowledge of the robot and learns the internal mapping between states and actions that control behaviors must contain. Simulation experiments validate the effectiveness of the reinforcement learning algorithm for autonomous heading control of the bionic underwater robot.
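The paper does not reproduce its state/action discretization or reward function here, so the following is only a minimal sketch of the general online Q-learning scheme it describes: heading error is binned into discrete states, the actions stand in for hypothetical fin commands (turn left / hold / turn right), and a toy plant replaces the robot simulator. All names and parameter values are illustrative assumptions, not the authors' settings.

```python
import random

random.seed(0)  # determinism for the illustration only

# Hypothetical discretization (the paper does not specify these):
N_STATES = 11          # discretized heading-error bins; centre bin = zero error
ACTIONS = [-1, 0, 1]   # stand-ins for turn-left / hold / turn-right fin commands
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Tabular Q-function: Q[state][action_index]
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table row."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def q_update(state, a_idx, reward, next_state):
    """One online Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + GAMMA * max(Q[next_state])
    Q[state][a_idx] += ALPHA * (td_target - Q[state][a_idx])

def toy_plant(state, a_idx):
    """Stand-in for the robot/simulator: the action shifts the heading-error
    bin (clipped to the table), and the reward penalizes the remaining error."""
    next_state = min(N_STATES - 1, max(0, state + ACTIONS[a_idx]))
    reward = -abs(next_state - N_STATES // 2)
    return next_state, reward

# Model-free online learning loop: the agent only sees states and rewards,
# never an explicit model of the fin dynamics.
for _ in range(2000):
    s = random.randrange(N_STATES)
    for _ in range(50):
        a = choose_action(s)
        s2, r = toy_plant(s, a)
        q_update(s, a, r, s2)
        s = s2
```

After training, the greedy policy steers the heading error toward the centre bin from either extreme, which is the qualitative behavior the paper targets (the real controller would of course act on the undulating-fin simulator rather than this toy plant).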

Published in:

2009 IEEE International Conference on Robotics and Biomimetics (ROBIO)

Date of Conference:

19-23 Dec. 2009