Optimality of greedy policy for a class of standard reward function of restless multi-armed bandit problem

3 Author(s)
Wang, K.; Liu, Q.; Chen, L. (Sch. of Inf., Wuhan Univ. of Technol., Wuhan, China)

In this study, the authors consider the restless multi-armed bandit problem, one of the most well-studied generalisations of the celebrated stochastic multi-armed bandit problem in decision theory. It is, however, known to be PSPACE-hard to approximate to within any non-trivial factor, so computing an optimal policy is generally intractable. A natural alternative is the greedy policy, attractive for its stability and simplicity; yet its intrinsically myopic behaviour generally incurs a loss of optimality. By analysing a class of so-called standard reward functions, the authors establish a closed-form condition on the discount factor β under which the greedy policy is guaranteed to be optimal with respect to the discounted expected reward criterion; in particular, the condition at β=1 implies optimality of the greedy policy under the average accumulated reward criterion. This class of standard reward functions can therefore be used to judge the optimality of the greedy policy without any complicated calculation. Examples from cognitive radio networks are presented to verify the effectiveness of the mathematical result in judging the optimality of the greedy policy.
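To make the setting concrete, the following is a minimal illustrative sketch (not taken from the paper) of the greedy, i.e. myopic, policy in the cognitive-radio scenario the abstract alludes to. It assumes a common simplified model: each arm is a two-state Gilbert-Elliott channel with transition probabilities `p01` (bad→good) and `p11` (good→good); the player maintains a belief that each channel is good, senses the channel with the highest belief every slot, and accrues discounted reward 1 when the sensed channel is good. All names and parameter values here are hypothetical choices for illustration.

```python
import random

def belief_update(omega, p01, p11, observed=None):
    """One-step belief propagation for a two-state Markov channel.
    If the arm was sensed, condition on the observation first (the belief
    collapses to 0 or 1), then apply the Markov transition."""
    if observed is not None:
        omega = 1.0 if observed else 0.0
    return omega * p11 + (1.0 - omega) * p01

def greedy_policy(beliefs):
    """Myopic rule: sense the channel believed most likely to be good."""
    return max(range(len(beliefs)), key=lambda i: beliefs[i])

def simulate(n_arms=3, horizon=1000, beta=0.99, p01=0.3, p11=0.8, seed=0):
    """Run the greedy policy and return the discounted accumulated reward."""
    rng = random.Random(seed)
    states = [rng.random() < 0.5 for _ in range(n_arms)]  # true channel states
    beliefs = [0.5] * n_arms                              # initial beliefs
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        arm = greedy_policy(beliefs)
        obs = states[arm]                                 # sensing outcome
        total += discount * (1.0 if obs else 0.0)
        discount *= beta
        # the bandit is "restless": every arm's state evolves, sensed or not
        states = [rng.random() < (p11 if s else p01) for s in states]
        beliefs = [belief_update(w, p01, p11, obs if i == arm else None)
                   for i, w in enumerate(beliefs)]
    return total
```

The myopic choice maximises only the immediate expected reward; the paper's contribution is a condition on β under which, for standard reward functions, this short-sighted rule nevertheless coincides with the optimal policy.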

Published in:

IET Signal Processing (Volume 6, Issue 6)