
On State Aggregation to Approximate Complex Value Functions in Large-Scale Markov Decision Processes



Author: Qing-Shan Jia, Center for Intelligent and Networked Systems (CFINS), Department of Automation, TNLIST, Tsinghua University, Beijing, China

Many small electronic devices, such as cell phones and wireless sensors, have limited memory, computing power, and battery capacity. The pervasive application of these devices in industry, the military, and daily life requires simple policies that are easy to implement and can be executed in a reasonably short time. The Markov decision process (MDP) provides a general framework for these policy optimization problems with a complexity constraint. In many cases, we can use a powerful computer to find the optimal (or a good) policy and its value function first, and then approximate the value function by a simple one. The approximation usually depends on heuristics or experience, because the relationship between the complexity of a function and the approximation error is not clear in general. In this paper, we assume the optimal value function is known (or a reasonably good estimate is available) and consider how to approximate such a complex value function. Because state aggregation is widely applied in large-scale MDPs, we focus on piecewise constant approximate value functions and use the number of aggregated states to measure the complexity of a value function. We quantify how the complexity of a value function affects the approximation error. When the optimal value function is known exactly, we develop an algorithm that finds the best simple state aggregation in polynomial time. When only estimates of the optimal value function are available, we apply ordinal optimization to find good simple state aggregations with high probability. The algorithms are demonstrated on a node activation policy optimization problem in a wireless sensor network. We hope this work sheds some light on how to find simple policies with good performance.
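To illustrate the flavor of the problem, here is a minimal sketch (not the paper's algorithm; the function name, error criterion, and dynamic-programming formulation are assumptions for illustration) of finding a best piecewise constant approximation: states are sorted by their optimal values and partitioned into k contiguous groups, each group sharing one constant, so as to minimize the worst-case approximation error.

```python
# Hypothetical sketch of state aggregation for value function
# approximation. Given the optimal values V(s) of all states, we
# partition the sorted values into k contiguous groups; each group is
# represented by one constant (the midpoint of its value range), so a
# group's worst-case error is half its value spread. A simple dynamic
# program finds the partition minimizing the maximum error.

def best_aggregation(values, k):
    """Return (max_error, groups), where groups is a list of
    (low, high) value ranges for the k aggregated states."""
    v = sorted(values)
    n = len(v)

    def group_err(i, j):
        # Error of representing v[i..j] by the midpoint constant.
        return (v[j] - v[i]) / 2.0

    INF = float("inf")
    # dp[m][j]: best max-error covering the first j+1 states with m groups.
    dp = [[INF] * n for _ in range(k + 1)]
    cut = [[-1] * n for _ in range(k + 1)]
    for j in range(n):
        dp[1][j] = group_err(0, j)
    for m in range(2, k + 1):
        for j in range(n):
            for i in range(m - 1, j + 1):  # last group is v[i..j]
                cand = max(dp[m - 1][i - 1], group_err(i, j))
                if cand < dp[m][j]:
                    dp[m][j] = cand
                    cut[m][j] = i

    # Recover the group boundaries by walking the cut table backwards.
    groups, j = [], n - 1
    for m in range(k, 0, -1):
        i = cut[m][j] if m > 1 else 0
        groups.append((v[i], v[j]))
        j = i - 1
    groups.reverse()
    return dp[k][n - 1], groups
```

The dynamic program runs in polynomial time (O(k n^2) here), which is consistent in spirit with the abstract's claim that the best simple state aggregation can be found in polynomial time when the optimal value function is known exactly; the paper's actual algorithm and error measure may differ.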

Published in:

IEEE Transactions on Automatic Control (Volume: 56, Issue: 2)