A self-recovery approach to the probabilistic invariance problem for stochastic hybrid systems

2 Author(s)
Prandini, M. ; Piroddi, L. — Dipartimento di Elettronica e Informazione, Politecnico di Milano, Milano, Italy

In this paper, we consider the problem of designing a feedback policy for a discrete-time stochastic hybrid system that should be kept operating within some compact set A. To this end, we introduce an infinite-horizon discounted average reward function, where a negative reward is associated with the transitions driving the system outside A and a positive reward with those leading it back into A. The idea is that the stationary policy maximizing this reward function will keep the system within A as long as possible and, if the system happens to exit A, will bring it back to A as soon as possible, consistent with the system dynamics. This self-recovery approach is particularly useful in those cases where it is not possible to maintain the system within A indefinitely. The performance of the resulting strategy is assessed on a benchmark example.
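The reward design described in the abstract can be sketched on a toy finite-state problem. Everything below (the state grid, the safe set, the noise model, the discount factor) is an invented illustration of the general idea, not the system or parameters used in the paper: transitions leaving A are penalized, transitions re-entering A are rewarded, and a stationary policy maximizing the discounted reward is computed by value iteration.

```python
import numpy as np

# Toy illustration (hypothetical, not from the paper): states 0..4 on a line,
# safe set A = {1, 2, 3}, two actions (move left / move right), and a noise
# term that flips the move with probability p_slip.
n_states = 5
A = {1, 2, 3}                 # compact set the system should stay in
actions = (-1, +1)            # move left / move right
gamma = 0.95                  # discount factor (illustrative choice)
p_slip = 0.2                  # probability the intended move is reversed

def step_distribution(x, a):
    """Distribution over next states from state x under action a."""
    probs = np.zeros(n_states)
    intended = min(max(x + a, 0), n_states - 1)
    slipped = min(max(x - a, 0), n_states - 1)
    probs[intended] += 1.0 - p_slip
    probs[slipped] += p_slip
    return probs

def reward(x, x_next):
    """Penalize transitions leaving A; reward transitions back into A."""
    if x in A and x_next not in A:
        return -1.0
    if x not in A and x_next in A:
        return +1.0
    return 0.0

# Value iteration for the discounted reward; the greedy policy it yields
# is the stationary "self-recovery" policy for this toy model.
V = np.zeros(n_states)
for _ in range(500):
    Q = np.zeros((n_states, len(actions)))
    for x in range(n_states):
        for ai, a in enumerate(actions):
            probs = step_distribution(x, a)
            Q[x, ai] = sum(p * (reward(x, xn) + gamma * V[xn])
                           for xn, p in enumerate(probs))
    V = Q.max(axis=1)

policy = {x: actions[int(np.argmax(Q[x]))] for x in range(n_states)}
```

On this toy model, the resulting policy steers boundary states inward (it avoids leaving A from states 1 and 3) and drives the outside states 0 and 4 back toward A, mirroring the self-recovery behavior the abstract describes.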

Published in:

2012 IEEE 51st Annual Conference on Decision and Control (CDC)

Date of Conference:

10-13 Dec. 2012