Finite horizon analysis of infinite CTMDPs

1 Author(s)
Buchholz, P. (Inf. IV, Tech. Univ. Dortmund, Dortmund, Germany)

Continuous Time Markov Decision Processes (CTMDPs) are used to describe optimization problems in many applications, including system maintenance and control. Often one is interested in a control strategy, or policy, that optimizes the gain of a system over a finite interval, denoted as the finite horizon. The computation of an ε-optimal policy, i.e., a policy that reaches the optimal gain up to some small ε, is often hindered by state-space explosion: the state spaces of realistic models can be very large or even infinite. The paper presents new algorithms to compute approximately optimal policies for CTMDPs with large or infinite state spaces. The new approach computes bounds on the achievable gain, together with a policy that reaches the lower bound, using a variant of uniformization on a finite subset of the state space. It is also shown how the approach can be applied to models with unbounded rewards or transition rates, for which uniformization cannot be applied per se.
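As background for the uniformization variant mentioned in the abstract, here is a minimal sketch of standard uniformization for a finite CTMC (not the paper's algorithm for CTMDPs; the function and variable names are illustrative). It converts the generator into a discrete-time chain subordinated to a Poisson process and sums Poisson-weighted powers until the truncation error drops below a tolerance:

```python
import numpy as np
from math import exp

def uniformize(Q, p0, t, tol=1e-8):
    """Transient distribution pi(t) of a finite CTMC via uniformization.

    Q   : generator matrix (off-diagonal rates >= 0, rows sum to 0)
    p0  : initial probability distribution (row vector)
    t   : time horizon
    tol : bound on the truncated Poisson tail mass
    """
    n = Q.shape[0]
    lam = max(-Q[i, i] for i in range(n))   # uniformization rate >= all exit rates
    P = np.eye(n) + Q / lam                 # embedded DTMC transition matrix
    # pi(t) = sum_k e^{-lam t} (lam t)^k / k!  *  p0 P^k
    weight = exp(-lam * t)                  # Poisson(lam*t) probability of k = 0
    term = p0.copy()                        # p0 P^k, starting with k = 0
    result = weight * term
    acc = weight                            # accumulated Poisson mass
    k = 0
    while 1.0 - acc > tol:                  # stop when tail mass is below tol
        k += 1
        term = term @ P
        weight *= lam * t / k
        result += weight * term
        acc += weight
    return result
```

For a symmetric two-state chain with both rates equal to 1, the exact solution is pi_0(t) = (1 + e^{-2t}) / 2, which the routine reproduces to within the tolerance. The paper's contribution, per the abstract, is extending this style of computation to large or infinite CTMDP state spaces by working on a finite subset and deriving gain bounds.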

Published in:

2012 42nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)

Date of Conference:

25-28 June 2012