State-feedback control of Markov chains with safety bounds

Authors: A. Arapostathis (Dept. of Electr. & Comput. Eng., Texas Univ., Austin, TX, USA); R. Kumar; Shun-Pin Hsu

In an earlier paper [A. Arapostathis et al., 2003], we introduced the notion of safety control of stochastic discrete event systems (DESs), modeled as controlled Markov chains. Safety was specified as an upper bound on the components of the state probability distribution, and the class of irreducible and aperiodic Markov chains was analyzed for satisfying such a safety property. Under the assumption of complete state observation, we identified (i) the set of all safety-enforcing state-feedback controllers that impose the safety requirement for all safe initial distributions, and (ii) the maximal invariant set of safe distributions for a given state-feedback controller. In this paper we extend the work reported in [A. Arapostathis et al., 2003] in several ways: (i) safety is specified in terms of both upper and lower bounds; (ii) a quite general class of Markov chains is analyzed, one that does not exclude reducible or periodic chains; (iii) a general iterative algorithm is obtained for computing the maximal invariant set of safe distributions, in which the initial set for the iteration can be arbitrary; and (iv) an explicit upper bound is obtained on the number of steps needed for the iterative algorithm to terminate.
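To make the safety notion concrete, the following is a minimal sketch (not the paper's algorithm): under a fixed state-feedback controller, the closed-loop system is a Markov chain with a row-stochastic transition matrix, and safety requires that the distribution trajectory stay within componentwise upper and lower bounds at every step. The function name, the toy chain, and the bounds below are all illustrative assumptions.

```python
import numpy as np

def is_safe_trajectory(pi0, P, lower, upper, horizon=100, tol=1e-12):
    """Check componentwise bounds along the trajectory pi_k = pi_0 P^k.

    Illustrative check only: it simulates the closed-loop chain for a
    finite horizon, stopping early once the distribution has numerically
    converged (from then on the bounds hold iff they hold at the limit).
    """
    pi = np.asarray(pi0, dtype=float)
    for _ in range(horizon):
        # Safety bound: lower <= pi_k <= upper, componentwise.
        if np.any(pi < lower - tol) or np.any(pi > upper + tol):
            return False
        nxt = pi @ P  # one step of the closed-loop chain
        if np.allclose(nxt, pi, atol=tol):
            break
        pi = nxt
    return True

# Toy 3-state closed-loop chain with loose safety bounds.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
lower = np.array([0.0, 0.1, 0.0])
upper = np.array([0.6, 0.7, 0.5])
print(is_safe_trajectory([1/3, 1/3, 1/3], P, lower, upper))  # -> True
```

The paper's contribution goes beyond such per-trajectory simulation: it characterizes the maximal invariant set of safe distributions directly, with an explicit bound on the number of iterations needed.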

Published in:

Proceedings of the 42nd IEEE Conference on Decision and Control, 2003 (Volume 6)

Date of Conference:

9-12 Dec. 2003