Application of Artificial Neural Network Based on Q-learning for Mobile Robot Path Planning

3 Author(s)
Caihong Li (School of Computer Science and Technology, Shandong University of Technology, Zibo, Shandong Province, China; School of Control Science and Engineering, Shandong University, Jinan, Shandong Province, China); Jingyuan Zhang; Yibin Li

Path planning is a difficult part of the navigation task for a mobile robot in a dynamic and unknown environment. It requires solving a mapping relationship between the sensing space and the action space. This relationship can be obtained in different ways, but it is difficult to express with an exact equation. This paper uses a multi-layer feedforward artificial neural network (ANN), exploiting its powerful nonlinear function approximation, to construct a path-planning controller. The path-planning task is thereby simplified to a classification problem over five state-action mapping relationships. A reinforcement learning method, Q-learning, is used to collect training samples for the ANN controller. Finally, the trained controller runs in the simulation environment and further retrains itself by combining the reinforcement signal received during interaction with the environment. The strategy based on the combination of ANN and Q-learning performs better than either method alone. The simulation results also show that the strategy finds a more nearly optimal path than Q-learning alone.
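The two-stage pipeline described in the abstract — tabular Q-learning collecting state-action samples, then a feedforward ANN learning the state-to-action classification — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the 5x5 grid world, the four-action set, the network size, and all hyperparameters are assumptions standing in for the paper's sensing space and its five state-action classes.

```python
import numpy as np

# Hypothetical 5x5 grid world standing in for the robot's sensing space.
N = 5
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    """Move in the grid, clamped to the borders; reaching GOAL ends the episode."""
    r, c = s
    dr, dc = ACTIONS[a]
    ns = (max(0, min(N - 1, r + dr)), max(0, min(N - 1, c + dc)))
    reward = 1.0 if ns == GOAL else -0.04
    return ns, reward, ns == GOAL

# --- Stage 1: tabular Q-learning collects the state-action training samples ---
rng = np.random.default_rng(0)
Q = np.zeros((N, N, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):
    s = (0, 0)
    for _ in range(50):
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        ns, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[ns]) * (not done) - Q[s][a])
        s = ns
        if done:
            break

# The greedy policy from Q becomes a labelled data set: state -> action class.
X = np.array([[r / (N - 1), c / (N - 1)] for r in range(N) for c in range(N)])
y = np.array([int(np.argmax(Q[r, c])) for r in range(N) for c in range(N)])

# --- Stage 2: a small feedforward ANN learns the state->action classification ---
H = 16  # hidden units (assumed size, not from the paper)
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, len(ACTIONS))); b2 = np.zeros(len(ACTIONS))
onehot = np.eye(len(ACTIONS))[y]
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = (p - onehot) / len(X)              # softmax cross-entropy gradient
    gW2 = h.T @ g
    gh = (g @ W2.T) * (1 - h ** 2)         # backprop through tanh
    W2 -= 0.5 * gW2; b2 -= 0.5 * g.sum(axis=0)
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

acc = float(np.mean(np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1) == y))
print(f"ANN reproduces the Q-learned policy on {acc:.0%} of states")
```

The paper's third step, retraining the controller online from the reinforcement signal, would correspond here to continuing the gradient updates with new (state, action) pairs gathered while the trained network drives the robot.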

Published in:

2006 IEEE International Conference on Information Acquisition

Date of Conference:

20-23 Aug. 2006