
Convergence of the policy iteration algorithm with applications to queueing networks and their fluid models

Author: S.P. Meyn, Coordinated Science Laboratory, University of Illinois, Urbana, IL, USA

The average cost optimal control problem is addressed for Markov decision processes with unbounded cost. It is shown that the policy iteration algorithm generates a sequence of policies that are c-regular (a strong stability condition), where c is the cost function under consideration. Furthermore, under these conditions the sequence of relative value functions generated by the algorithm is bounded from below and “nearly” decreasing, from which it follows that the algorithm always converges. These results shed new light on the optimal scheduling problem for multiclass queueing networks. Surprisingly, the formulation of optimal policies for a network turns out to be closely linked to the optimal control of its associated fluid model, and the relative value function for the network control problem is closely related to the value function for the fluid network. These connections are surprising since randomness plays such an important role in network performance.
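For readers unfamiliar with the algorithm at the heart of the paper, the following is a minimal sketch of policy iteration for the average-cost criterion, restricted to a finite-state, finite-action MDP. The paper treats general state spaces with unbounded cost, so this only illustrates the evaluation/improvement loop; the function name and the toy data are hypothetical, not from the paper.

```python
import numpy as np

def policy_iteration(P, c, max_iters=100):
    """Average-cost policy iteration for a finite unichain MDP.

    P : list of (n_states x n_states) transition matrices, one per action
    c : (n_states x n_actions) one-step cost matrix
    """
    n_states, n_actions = c.shape
    policy = np.zeros(n_states, dtype=int)  # arbitrary initial policy
    for _ in range(max_iters):
        # Policy evaluation: solve Poisson's equation
        #   eta + h(s) = c(s, w(s)) + sum_{s'} P_w(s, s') h(s'),
        # with the normalization h(0) = 0 pinning down h.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = c[np.arange(n_states), policy]
        A = np.zeros((n_states + 1, n_states + 1))
        A[:n_states, 0] = 1.0                        # coefficient of eta
        A[:n_states, 1:] = np.eye(n_states) - P_pi   # (I - P_w) h
        A[n_states, 1] = 1.0                         # normalization h(0) = 0
        b = np.append(c_pi, 0.0)
        sol = np.linalg.lstsq(A, b, rcond=None)[0]
        eta, h = sol[0], sol[1:]
        # Policy improvement: greedy step w.r.t. the relative value function h.
        Q = c + np.stack([P[a] @ h for a in range(n_actions)], axis=1)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            break  # policy is stable, hence average-cost optimal here
        policy = new_policy
    return policy, eta, h

# Hypothetical two-state, two-action example.
P = [np.array([[0.9, 0.1], [0.5, 0.5]]),   # transitions under action 0
     np.array([[0.2, 0.8], [0.1, 0.9]])]   # transitions under action 1
c = np.array([[1.0, 3.0],                  # c[s, a]
              [2.0, 0.5]])
policy, eta, h = policy_iteration(P, c)
print("policy:", policy, "average cost:", eta)
```

Each evaluation step computes the average cost eta and the relative value function h for the current policy, and each improvement step is the one-step greedy minimization with respect to h; the paper's contribution is showing that, in the general-state-space setting with unbounded cost, this loop preserves c-regularity and the resulting sequence of relative value functions converges.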

Published in:

Proceedings of the 35th IEEE Conference on Decision and Control, 1996 (Volume 1)

Date of Conference:

11–13 December 1996