The Optimal Observability of Partially Observable Markov Decision Processes: Discrete State Space

Authors (3): M. Rezaeian (Department of Electrical & Electronic Engineering, University of Melbourne, Melbourne, VIC, Australia); Ba-Ngu Vo; J. S. Evans

We consider autonomous partially observable Markov decision processes in which the control action influences only the observation process. Taking the entropy of the Markov information-state process as the incurred cost, we pose the optimal observability problem as a Markov decision scheduling problem that minimizes the infinite-horizon cost. This scheduling problem is shown to be equivalent to the minimization of an entropy measure, called estimation entropy, which is related to the invariant measure of the information state.
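The setting in the abstract can be illustrated with a minimal sketch: a belief (information-state) recursion for a hidden Markov chain where the action selects which sensor generates observations, with the Shannon entropy of the belief as the per-step cost. The two-state chain, the sensor likelihood matrices, and the constant-sensor policies below are all hypothetical toy choices, not taken from the paper; the long-run average belief entropy under each fixed sensor is a crude Monte Carlo stand-in for the estimation entropy the paper minimizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state Markov chain; the action picks which sensor
# (observation matrix) is used and does NOT affect the state dynamics.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # P[i, j] = Pr(x' = j | x = i)
B = {                                # B[a][x, y] = Pr(y | x, sensor a)
    0: np.array([[0.8, 0.2],
                 [0.3, 0.7]]),
    1: np.array([[0.6, 0.4],
                 [0.1, 0.9]]),
}

def belief_update(b, a, y):
    """One step of the information-state recursion:
    predict through P, then correct with the likelihood of y under sensor a."""
    pred = b @ P                     # time update
    post = pred * B[a][:, y]         # measurement update
    return post / post.sum()

def entropy(b):
    """Shannon entropy (bits) of the belief -- the per-step cost here."""
    b = b[b > 0]
    return -np.sum(b * np.log2(b))

def avg_entropy(a, T=20000):
    """Monte Carlo estimate of the long-run average belief entropy under
    the constant scheduling policy that uses sensor a at every step."""
    x = 0
    b = np.array([0.5, 0.5])
    total = 0.0
    for _ in range(T):
        x = rng.choice(2, p=P[x])        # true state evolves
        y = rng.choice(2, p=B[a][x])     # observation from sensor a
        b = belief_update(b, a, y)
        total += entropy(b)
    return total / T

for a in (0, 1):
    print(f"sensor {a}: average belief entropy ~ {avg_entropy(a):.3f}")
```

Comparing the two averages picks the better constant policy for this toy chain; the paper's contribution is to show that the general scheduling problem reduces to minimizing the estimation entropy defined through the invariant measure of the information state, rather than requiring such simulation.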

Published in: IEEE Transactions on Automatic Control (Volume: 55, Issue: 12)