Real-time motor control using recurrent neural networks

2 Author(s)
Dongsung Huh; University of California, San Diego, La Jolla, 92093 USA; Emanuel Todorov

Currently, the field of sensory-motor neuroscience lacks a computational model that can replicate the real-time control capabilities of the biological brain. Because neural and anatomical data are incomplete, traditional neural network training methods fail to model sensory-motor systems. Here we introduce a novel modeling method, based on the stochastic optimal control framework, that is well suited for this purpose. Our controller is implemented as a recurrent neural network (RNN) trained to approximate the optimal global control law for a given plant and cost function. For robustness, we employ the risk-sensitive objective function proposed by Jacobson (1973). To maximize optimization efficiency, we introduce a step-response sampling method that minimizes the complexity of the optimization problem. Optimization uses the conjugate gradient method, with gradients computed via Pontryagin's maximum principle. The result is highly stable and robust RNN controllers that can generate a wide variety of attractor dynamics in the plant, which we propose as building blocks of movement generation. We show two such examples: one based on a point attractor and one based on a limit cycle.
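To make the setup in the abstract concrete, the following is a minimal illustrative sketch, not the authors' actual model: an untrained RNN controller driving a noisy point-mass plant, evaluated under a risk-sensitive objective of the exponential-of-cost form associated with Jacobson (1973), J_theta = (1/theta) log E[exp(theta C)], where C is the cumulative cost. All names, dimensions, dynamics, and parameter values here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: the plant, network sizes, and costs below are
# assumptions, not the model from the paper.

rng = np.random.default_rng(0)

n_x, n_h = 2, 8          # plant state (position, velocity) and RNN hidden size
dt, T = 0.05, 100        # integration step and horizon length

# Random (untrained) RNN controller parameters.
W_h = rng.normal(scale=0.3, size=(n_h, n_h))   # recurrent weights
W_x = rng.normal(scale=0.3, size=(n_h, n_x))   # sensory input weights
w_u = rng.normal(scale=0.3, size=n_h)          # readout to a scalar control

def rollout(x0, noise_scale=0.1):
    """Simulate plant + RNN controller; return the cumulative quadratic cost."""
    x = np.array(x0, dtype=float)   # [position, velocity]
    h = np.zeros(n_h)               # RNN hidden state
    cost = 0.0
    for _ in range(T):
        h = np.tanh(W_h @ h + W_x @ x)           # RNN dynamics
        u = w_u @ h                              # control signal (force)
        # Point-mass plant with additive process noise.
        x = x + dt * np.array([x[1], u]) \
              + noise_scale * np.sqrt(dt) * rng.normal(size=n_x)
        cost += dt * (x @ x + 0.01 * u * u)      # state cost + control cost
    return cost

def risk_sensitive_cost(theta=0.05, n_samples=50):
    """Monte-Carlo estimate of (1/theta) * log E[exp(theta * C)].

    Uses the log-sum-exp shift for numerical stability, since exp(theta * C)
    can overflow for unlucky rollouts.
    """
    costs = np.array([rollout([1.0, 0.0]) for _ in range(n_samples)])
    m = costs.max()
    return (m + np.log(np.mean(np.exp(theta * (costs - m))))) / theta

print(risk_sensitive_cost())
```

For theta > 0 this objective penalizes the variability of the cost as well as its mean (by Jensen's inequality it is never below the risk-neutral expected cost), which is one way a risk-sensitive criterion can yield more robust controllers; the paper's actual training uses Pontryagin-based gradients rather than the Monte-Carlo evaluation shown here.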

Published in:

2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning

Date of Conference:

March 30 - April 2, 2009