Robust optimal decision policies for servicing targets in acyclic digraphs

Authors:

Cameron Nowzari and Jorge Cortés, Department of Mechanical and Aerospace Engineering, University of California, San Diego, CA 92093, USA

Abstract:

This paper considers a class of scenarios in which targets emerge from a known location and move toward unknown destinations in a weighted acyclic digraph. A decision maker with knowledge of the target positions must decide when preparations should be made at any given destination for their arrival. We show how this problem can be formulated as an optimal stopping problem on a Markov chain, which sets the basis for the introduction of the Best Investment Algorithm. Our strategy prescribes when investments must be made, conditioned on the targets' motion along the digraph. We establish the optimality of this policy and examine its robustness to changing problem conditions, identifying a sufficient condition under which the solution computed by the Best Investment Algorithm remains optimal when the problem data change. Several simulations illustrate our results.
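To give a concrete sense of the optimal-stopping formulation mentioned in the abstract, the sketch below performs backward induction on a small weighted acyclic digraph to decide, at each node, whether to stop (invest/prepare now) or continue waiting. This is not the paper's Best Investment Algorithm; the digraph, transition probabilities (transition_probs), and stopping payoffs (stop_reward) are hypothetical placeholders chosen only to illustrate the stop-or-continue computation.

    # Illustrative sketch only: optimal stopping via backward induction
    # on a weighted acyclic digraph. NOT the paper's Best Investment
    # Algorithm; all data below are hypothetical placeholders.
    from functools import lru_cache

    # Hypothetical DAG: node -> list of (successor, transition probability)
    transition_probs = {
        "source": [("a", 0.6), ("b", 0.4)],
        "a": [("dest1", 1.0)],
        "b": [("dest1", 0.3), ("dest2", 0.7)],
        "dest1": [],
        "dest2": [],
    }

    # Hypothetical payoff for stopping (investing) at each node.
    stop_reward = {"source": 0.0, "a": 1.0, "b": 0.5, "dest1": 2.0, "dest2": 1.5}

    @lru_cache(maxsize=None)
    def value(node):
        """Optimal expected value from `node`: stop now or wait one step."""
        succs = transition_probs[node]
        if not succs:  # terminal destination: no choice but to stop
            return stop_reward[node]
        continue_value = sum(p * value(nxt) for nxt, p in succs)
        return max(stop_reward[node], continue_value)

    def policy(node):
        """Return 'stop' if stopping now is at least as good as continuing."""
        succs = transition_probs[node]
        if not succs:
            return "stop"
        continue_value = sum(p * value(nxt) for nxt, p in succs)
        return "stop" if stop_reward[node] >= continue_value else "continue"

    if __name__ == "__main__":
        for n in transition_probs:
            print(n, policy(n), round(value(n), 3))

Because the digraph is acyclic, the recursion terminates, and memoization evaluates each node once; the resulting policy plays the same role as a stopping rule conditioned on the observed position in the digraph.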

Published in:

2012 IEEE 51st IEEE Conference on Decision and Control (CDC)

Date of Conference:

10-13 Dec. 2012