Constrained stochastic control with probabilistic criteria and search optimization

Author:

R. C. Chen, Naval Research Laboratory, Washington, DC, USA

Abstract:

The dynamic programming approach is applied to fully and partially observed constrained Markov process control problems with probabilistic and total cost criteria, motivated by the optimal search problem. For the fully observed case, pointwise convergence of the finite horizon optimal cost function to that of the infinite horizon problem is shown. For the partially observed case, a constrained finite horizon problem with both probabilistic and expected total cost criteria is formulated and shown to be applicable to the radar search problem. This formulation allows the explicit inclusion of probability of detection and probability of false alarm criteria, and consequently allows the integration of control and detection objectives. This is illustrated by formulating an optimal truncated sequential detection problem that minimizes the resources required to achieve specified probabilities of detection and false alarm. A simple example of optimal truncated sequential detection, representing the optimization of a radar detection process, is given.
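The backward recursion at the core of the finite horizon dynamic programming approach can be sketched for a fully observed Markov control problem. The two-state, two-action model below and its numbers are hypothetical, purely for illustration, and a discount factor is added so that the finite horizon values converge as the horizon grows; the paper's constrained and partially observed formulations are not reproduced here.

```python
# Finite-horizon dynamic programming (backward recursion) for a toy MDP:
#   V_t(s) = min_a [ c(s, a) + GAMMA * sum_{s'} P(s' | s, a) * V_{t+1}(s') ]
# All model data below are hypothetical illustration values.

# P[a][s][s2]: probability of moving from state s to s2 under action a
P = [
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
]
# c[s][a]: stage cost of taking action a in state s
c = [[1.0, 2.0], [4.0, 0.5]]
GAMMA = 0.95  # discount factor (assumption, for convergence of the sketch)

def finite_horizon_dp(horizon):
    """Optimal N-stage cost V(s), starting from zero terminal cost."""
    n_states = len(c)
    n_actions = len(P)
    V = [0.0] * n_states
    for _ in range(horizon):
        V = [
            min(
                c[s][a] + GAMMA * sum(P[a][s][s2] * V[s2]
                                      for s2 in range(n_states))
                for a in range(n_actions)
            )
            for s in range(n_states)
        ]
    return V
```

As the horizon grows, the finite horizon cost function computed this way stabilizes, which mirrors (in this simplified discounted setting) the pointwise convergence result stated in the abstract for the fully observed case.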

Published in:

2004 43rd IEEE Conference on Decision and Control (CDC), Volume 3

Date of Conference:

14-17 Dec. 2004