Discrete-time local dynamic programming

Authors: Berniker, M. (Dept. of Phys. Med. & Rehabilitation, Northwestern Univ., Evanston, IL, USA); Kording, K.

Optimal control theory is a powerful analytical tool in many diverse fields, including biological motor control, where it is used to predict characteristics of movement under optimal conditions. However, finding solutions to these control problems can be very difficult for biological systems, where nonlinearity and stochasticity are typical. In an effort to overcome this difficulty and analyze more realistic problems, we present an algorithm that approximates the solution to the discrete-time Hamilton-Jacobi-Bellman equations. As with similar local dynamic programming algorithms, the algorithm approximates a local solution around a nominal trajectory and progressively improves both the trajectory and the local estimate of the value function. Using this algorithm, we obtain optimal solutions for a single-joint musculoskeletal system. In particular, we take advantage of this new algorithm to examine solutions with fast and discontinuous dynamics and non-Gaussian noise. These solutions are examined for some of the stereotypical responses of biological systems, such as tri-phasic muscle activations and bell-shaped velocity profiles. The results are also compared with their deterministic counterparts, emphasizing the need for stochastic solutions.
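The local dynamic programming scheme described above — fit a quadratic value-function approximation backward along a nominal trajectory, then roll the improved policy forward to update that trajectory — can be sketched in iLQR style. This is a generic illustration, not the paper's algorithm: the scalar dynamics, quadratic cost, and all function names are assumptions chosen for brevity.

```python
import numpy as np

# Toy scalar system and cost (illustrative, not from the paper)
def f(x, u):
    return x + 0.1 * u            # discrete-time dynamics x_{t+1} = f(x_t, u_t)

def stage_cost(x, u):
    return x**2 + 0.1 * u**2      # quadratic running cost

def rollout(x0, us):
    """Simulate the dynamics forward under a control sequence."""
    xs = [x0]
    for u in us:
        xs.append(f(xs[-1], u))
    return np.array(xs)

def local_dp(x0, T=20, iters=50, tol=1e-6):
    """Iteratively improve a nominal trajectory via local quadratic
    value-function approximations (iLQR-style sketch)."""
    us = np.zeros(T)              # nominal control sequence
    for _ in range(iters):
        xs = rollout(x0, us)
        # Backward pass: propagate a local quadratic value function
        Vx, Vxx = 2.0 * xs[-1], 2.0       # terminal cost x_T^2
        ks, Ks = np.zeros(T), np.zeros(T)
        for t in reversed(range(T)):
            fx, fu = 1.0, 0.1             # derivatives of f
            Qx  = 2.0 * xs[t] + fx * Vx
            Qu  = 0.2 * us[t] + fu * Vx
            Qxx = 2.0 + fx * Vxx * fx
            Quu = 0.2 + fu * Vxx * fu
            Qux = fu * Vxx * fx
            ks[t] = -Qu / Quu             # feedforward correction
            Ks[t] = -Qux / Quu            # feedback gain
            Vx  = Qx + Ks[t] * Quu * ks[t] + Ks[t] * Qu + Qux * ks[t]
            Vxx = Qxx + Ks[t] * Quu * Ks[t] + 2.0 * Ks[t] * Qux
        # Forward pass: apply the local policy to get a new trajectory
        new_us = np.empty(T)
        x = x0
        for t in range(T):
            new_us[t] = us[t] + ks[t] + Ks[t] * (x - xs[t])
            x = f(x, new_us[t])
        done = np.max(np.abs(new_us - us)) < tol
        us = new_us
        if done:
            break
    return us, rollout(x0, us)
```

Because this toy problem is linear-quadratic, the scheme converges essentially in one iteration; for the nonlinear, stochastic musculoskeletal systems in the paper, the backward pass would instead use local expansions of the true dynamics and cost at each sweep.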

Published in:

American Control Conference (ACC), 2011

Date of Conference:

June 29 - July 1, 2011