
Lifted-Rollout for Approximate Policy Iteration of Markov Decision Process


Abstract:

Sampling-based approximate policy iteration, which samples (or "rolls out") the current policy and finds an improvement from the samples, is an efficient and practical approach for solving for policies in Markov decision processes. Such an approach, however, suffers from the inherent variance of sampling. In this paper, we propose the lifted-rollout approach. This approach models the decision process as a directed acyclic graph and then lifts the possibly huge graph by compressing similar nodes. Finally, the approximate policy is obtained by inference on the lifted graph. Experiments show that our approach avoids the sampling variance and achieves significantly better performance.
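The sampling-based baseline that the abstract contrasts against can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the toy MDP, the function names, and the parameters (`n_samples`, `horizon`) are all assumptions made for the example. It estimates each action value by averaging Monte-Carlo rollouts of the current policy and then improves the policy greedily, which is exactly the step whose sampling variance the lifted-rollout approach is designed to avoid.

```python
import random

# Toy two-state MDP (illustrative, not from the paper).
# P[state][action] -> list of (probability, next_state, reward)
P = {
    0: {0: [(0.9, 0, 0.0), (0.1, 1, 1.0)],
        1: [(0.5, 0, 0.0), (0.5, 1, 0.5)]},
    1: {0: [(1.0, 1, 1.0)],
        1: [(1.0, 0, 0.0)]},
}
GAMMA = 0.9  # discount factor

def step(state, action, rng):
    """Sample one transition (next_state, reward) from the MDP."""
    r = rng.random()
    acc = 0.0
    for prob, nxt, rew in P[state][action]:
        acc += prob
        if r <= acc:
            return nxt, rew
    return nxt, rew  # fallback for floating-point round-off

def rollout_return(state, action, policy, horizon, rng):
    """Sample one truncated discounted return for (state, action),
    following `policy` after the first action."""
    s, rew = step(state, action, rng)
    total, discount = rew, GAMMA
    for _ in range(horizon - 1):
        s, rew = step(s, policy[s], rng)
        total += discount * rew
        discount *= GAMMA
    return total

def rollout_improve(policy, n_samples=200, horizon=30, seed=0):
    """One step of sampling-based policy improvement: estimate Q(s, a)
    by averaging rollouts, then act greedily on the estimates."""
    rng = random.Random(seed)
    new_policy = {}
    for s in P:
        q = {a: sum(rollout_return(s, a, policy, horizon, rng)
                    for _ in range(n_samples)) / n_samples
             for a in P[s]}
        new_policy[s] = max(q, key=q.get)
    return new_policy
```

Because each Q-estimate is a finite-sample average, the greedy step can pick a suboptimal action when the estimates' noise exceeds the gap between action values; the paper's lifted approach instead performs exact inference on a compressed graph of the decision process.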
Date of Conference: 11-11 December 2011
Date Added to IEEE Xplore: 23 January 2012
Conference Location: Vancouver, BC, Canada

