Abstract:
We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and the use of local trajectory optimizers to globally optimize a policy and its associated value function. Our focus is on finding steady-state policies for deterministic, time-invariant, discrete-time control problems with continuous states and actions, of the kind often found in robotics. In this paper, we describe our approach and provide initial results on several simulated robotics problems.
Published in: IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Volume 38, Issue 4, August 2008
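To make the combination concrete, below is a minimal sketch (not the authors' implementation) of the first two threads: value iteration over a sparse random sample of states, with the value function represented by a local, distance-weighted nearest-neighbor model. The pendulum dynamics, cost function, and all parameter values are illustrative assumptions, and the coarse grid search over torques is a stand-in for the local trajectory optimizers the paper uses to optimize the policy.

```python
# Sketch: approximate dynamic programming with sparse random state sampling
# and a local (nearest-neighbor, distance-weighted) value approximator.
# All dynamics, costs, and constants are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

DT = 0.05  # integration time step for the discrete-time dynamics

def step(x, u):
    # Deterministic, time-invariant, discrete-time dynamics: damped pendulum.
    theta, omega = x
    omega_new = omega + DT * (-9.8 * np.sin(theta) - 0.1 * omega + u)
    theta_new = theta + DT * omega_new
    return np.array([theta_new, omega_new])

def cost(x, u):
    # Quadratic one-step cost penalizing distance from the upright goal.
    theta, omega = x
    return theta**2 + 0.1 * omega**2 + 0.01 * u**2

# Thread 1: a sparse random sample of states stands in for a dense grid.
N = 500
states = np.column_stack([
    rng.uniform(-np.pi, np.pi, N),   # angle
    rng.uniform(-5.0, 5.0, N),       # angular velocity
])
values = np.zeros(N)

def approx_value(x, k=8):
    # Thread 2: local value model, a distance-weighted average over the
    # k nearest sampled states.
    d = np.linalg.norm(states - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return np.dot(w, values[idx]) / w.sum()

# Value iteration over the sampled states; the minimization over the
# continuous action is approximated here by a coarse torque grid, where
# the paper instead applies local trajectory optimization (thread 3).
GAMMA = 0.98
actions = np.linspace(-5.0, 5.0, 11)
for sweep in range(50):
    new_values = np.empty(N)
    for i, x in enumerate(states):
        backups = [cost(x, u) + GAMMA * approx_value(step(x, u))
                   for u in actions]
        new_values[i] = min(backups)
    values = new_values

def policy(x):
    # Greedy steady-state policy induced by the learned value function.
    return min(actions,
               key=lambda u: cost(x, u) + GAMMA * approx_value(step(x, u)))
```

Because the backup at each sampled state only queries the local model at the successor state, no global function approximator or state-space grid is needed; accuracy is governed by the sample density and the neighborhood size k.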