
Control of Markov chains with safety bounds

3 Author(s):
Arapostathis, A. (Dept. of Electr. & Comput. Eng., Univ. of Texas, Austin, TX, USA); Kumar, R.; Hsu, S.-P.

In an earlier paper, the authors introduced the notion of safety control of stochastic discrete event systems (DESs), modeled as controlled Markov chains. Safety was specified as an upper bound on the components of the state probability distribution, and the class of irreducible and aperiodic Markov chains was analyzed relative to this safety criterion. Under the assumption of complete state observations: 1) the authors identified the set of all state-feedback controllers that enforce the safety specification for all safe initial probability distributions; and 2) for any given state-feedback controller, the authors constructed the maximal invariant safe set (MISS). In this paper, the authors extend the work in several ways: 1) safety is specified in terms of both upper and lower bounds; 2) we consider a larger class of Markov chains that includes reducible and periodic chains; 3) we present a more general iterative algorithm for computing the MISS, which is quite flexible in its initialization; and 4) we obtain an explicit upper bound on the number of iterations needed for the algorithm to terminate.

Note to Practitioners: The paper studies "safety" control of stochastic systems modeled as Markov chains. Safety is defined as the requirement that the probability of each state remain bounded between an upper and a lower bound. For example, a financial investment policy should be such that the probability of ever being bankrupt is bounded below by a positive number. Prior works on control of Markov chains have addressed optimality but not safety. A condition is obtained under which a controlled Markov chain is guaranteed to be safe at all times. For chains that do not satisfy this condition, a maximal subset of the safe set of distributions is computed so that, if the chain is initialized with a distribution in that maximal subset, it remains safe at all times. A condition is obtained under which such a maximal set is nonempty. The computation of this maximal set is iterative, and we provide a condition under which it terminates in a finite number of iterations. Manufacturing system examples are included to illustrate the results.
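To make the safety criterion concrete: the state distribution of a finite Markov chain evolves as p_{t+1} = p_t P, and safety requires every component of p_t to stay between a lower and an upper bound. The sketch below is only an illustrative finite-horizon check of that criterion (it is not the paper's MISS algorithm, which characterizes safety for all time); the 2-state chain, bounds, and horizon are hypothetical examples.

```python
import numpy as np

def is_safe(p, lower, upper, tol=1e-12):
    """Componentwise safety check: lower <= p <= upper."""
    return bool(np.all(p >= lower - tol) and np.all(p <= upper + tol))

def remains_safe(p0, P, lower, upper, steps=100):
    """Propagate p_{t+1} = p_t P and verify the bounds over a finite
    horizon. Note: a finite-horizon check only; the paper's results
    guarantee safety for all t."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        if not is_safe(p, lower, upper):
            return False
        p = p @ P  # one step of the chain
    return is_safe(p, lower, upper)

# Hypothetical 2-state chain (row-stochastic transition matrix)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
lower = np.array([0.2, 0.1])   # lower bounds on the distribution
upper = np.array([0.8, 0.7])   # upper bounds on the distribution

print(remains_safe(np.array([0.5, 0.5]), P, lower, upper))  # safe start
print(remains_safe(np.array([0.1, 0.9]), P, lower, upper))  # violates lower[0]
```

For this chain the stationary distribution is (2/3, 1/3), which lies inside the bounds, so a start at (0.5, 0.5) stays safe as it converges; a start at (0.1, 0.9) is unsafe immediately. Computing the actual maximal invariant safe set requires the iterative set computation developed in the paper, not simulation.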

Published in:

IEEE Transactions on Automation Science and Engineering (Volume: 2, Issue: 4)