
Reinforcement learning based neuro-control systems for an unmanned helicopter

Authors:
Dong Jin Lee; Hyochoong Bang (Division of Aerospace Engineering, KAIST, Daejeon, South Korea)

This paper concerns the autonomous flight control system of an unmanned helicopter, in which a conventional feedback loop is combined with a reinforcement learning based neuro-controller. We assume that a PID (proportional-integral-derivative) type linear feedback controller is predesigned and can stabilize the system, although with limited performance. This conservative control behavior is improved by combining the baseline feedback controller with the neuro-controller. An actor-critic architecture is adopted as the learning agent: the actor is a feed-forward neural network, and the critic is approximated with a tabular function approximator. The Q-value based critic is trained with the SARSA algorithm, an on-policy variant of reinforcement learning. Several demonstrations are performed on a simple first-order system. The proposed neuro-control system is then applied to an unmanned helicopter, a highly nonlinear and complex system, and simulation results are presented.
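The abstract describes the learning setup only at a high level. As a rough illustration of how a SARSA-trained tabular critic can be layered on top of a fixed feedback law for a simple first-order plant, here is a minimal Python sketch. It is not the authors' implementation: the paper pairs a feed-forward neural-network actor with the tabular critic, whereas this sketch selects discrete corrective actions epsilon-greedily from the Q table directly, and the plant model, gains, discretization, and reward are all illustrative assumptions.

```python
import numpy as np

# Hypothetical first-order plant: x_dot = a*x + b*u, Euler-discretized with step DT.
A, B, DT = -1.0, 1.0, 0.05

def plant_step(x, u):
    return x + DT * (A * x + B * u)

# Fixed low-gain proportional controller standing in for the predesigned PID baseline.
def baseline_controller(error):
    return 0.5 * error

# Tabular SARSA critic over discretized tracking errors and candidate corrective actions.
ERR_BINS = np.linspace(-2.0, 2.0, 21)        # state discretization (assumed range)
ACTIONS = np.linspace(-1.0, 1.0, 9)          # candidate corrective actions (assumed range)
Q = np.zeros((len(ERR_BINS), len(ACTIONS)))  # Q(s, a) table

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1           # illustrative learning parameters

def state_index(error):
    # Map a continuous error to the nearest discretized state.
    return int(np.argmin(np.abs(ERR_BINS - error)))

def select_action(s):
    # Epsilon-greedy selection over corrective actions.
    if np.random.rand() < EPS:
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[s]))

def run_episode(x_ref=1.0, steps=200):
    x = 0.0
    s = state_index(x_ref - x)
    a = select_action(s)
    for _ in range(steps):
        error = x_ref - x
        # Total command = baseline feedback + learned correction.
        u = baseline_controller(error) + ACTIONS[a]
        x = plant_step(x, u)
        error_next = x_ref - x
        reward = -error_next ** 2            # penalize squared tracking error
        s_next = state_index(error_next)
        a_next = select_action(s_next)
        # SARSA (on-policy) temporal-difference update.
        Q[s, a] += ALPHA * (reward + GAMMA * Q[s_next, a_next] - Q[s, a])
        s, a = s_next, a_next

for episode in range(200):
    run_episode()
```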

Published in:

2010 International Conference on Control, Automation and Systems (ICCAS 2010)

Date of Conference:

27-30 Oct. 2010