Structured Threshold Policies for Dynamic Sensor Scheduling—A Partially Observed Markov Decision Process Approach

Authors: V. Krishnamurthy (University of British Columbia, Vancouver); D. V. Djonin

We consider the optimal sensor scheduling problem formulated as a partially observed Markov decision process (POMDP). Due to operational constraints, at each time instant the scheduler can dynamically select one out of a finite number of sensors and record a noisy measurement of an underlying Markov chain. The aim is to compute the optimal measurement scheduling policy so as to minimize a cost function comprising estimation errors and measurement costs. The formulation results in a nonstandard POMDP that is nonlinear in the information state. We give sufficient conditions on the cost function, the dynamics of the Markov chain, and the observation probabilities so that the optimal scheduling policy has a threshold structure with respect to a monotone likelihood ratio (MLR) ordering. As a result, the optimal scheduling policy can be implemented with low computational complexity. We then present stochastic approximation algorithms for estimating the best linear MLR order threshold policy.
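To make the setup concrete, the following is a minimal sketch (not the paper's algorithm) of the ingredients the abstract describes: a hidden Markov chain observed through one of two sensors, a Bayesian filter that propagates the information state (belief), and a threshold scheduling policy. For a two-state chain the MLR ordering of beliefs reduces to comparing the posterior probability of one state against a scalar threshold. All matrices, costs, and the threshold value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain; P[i, j] = Pr(x_{t+1} = j | x_t = i).  (Illustrative values.)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Two sensors with different accuracies; B[u][i, y] = Pr(y | state i, sensor u).
B = [np.array([[0.70, 0.30],    # sensor 0: cheap but noisy
               [0.30, 0.70]]),
     np.array([[0.95, 0.05],    # sensor 1: costly but accurate
               [0.05, 0.95]])]

def hmm_filter(pi, u, y):
    """Information-state (belief) update after using sensor u and observing y."""
    unnorm = B[u][:, y] * (P.T @ pi)   # predict, then correct by the likelihood
    return unnorm / unnorm.sum()

def threshold_policy(pi, tau=0.5):
    """Threshold policy on the belief: for a 2-state chain the MLR order is
    equivalent to ordering by pi[1], so the policy picks the accurate sensor
    whenever the belief in state 1 exceeds the threshold tau (tau is assumed)."""
    return 1 if pi[1] >= tau else 0

# Simulate the controlled sensing loop.
x = 0                        # true (hidden) state
pi = np.array([0.5, 0.5])    # initial belief
for t in range(20):
    u = threshold_policy(pi)             # schedule a sensor from the belief
    x = rng.choice(2, p=P[x])            # Markov chain transitions
    y = rng.choice(2, p=B[u][x])         # noisy measurement from sensor u
    pi = hmm_filter(pi, u, y)            # update the information state
```

The point of the paper's structural result is that, under the stated sufficient conditions, the optimal policy has exactly this threshold form, so computing the action requires only a comparison against a (curve of) thresholds rather than solving the full POMDP online.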

Published in:

IEEE Transactions on Signal Processing (Volume 55, Issue 10)