
On Penalization in Stochastic Multi-Armed Bandits


Abstract:

We study an important variant of the stochastic multi-armed bandit (MAB) problem that takes penalization into consideration. Instead of directly maximizing the cumulative expected reward, the learner must balance the total reward against the fairness level. In this paper, we present new insights into MAB and formulate the problem in a penalization framework, in which a rigorous penalized regret can be well defined and a more sophisticated regret analysis is possible. Under this framework, we propose a hard-threshold UCB-like algorithm that enjoys several merits, including asymptotic fairness, nearly optimal regret, and a good tradeoff between reward and fairness. Both gap-dependent and gap-independent regret bounds are established. Several insightful remarks illustrate the soundness of our theoretical analysis, and extensive experimental results corroborate the theory and demonstrate the usefulness of our problem formulation and of our method.
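The abstract does not give the algorithm's details, so the following is only a minimal illustrative sketch of one plausible hard-threshold UCB-style policy: any arm whose pull count falls below a fairness floor is pulled first; otherwise the standard UCB1 index decides. The floor fraction `floor_frac`, the threshold rule, and the Bernoulli reward model are assumptions for illustration, not the authors' algorithm.

```python
import math
import random

def hard_threshold_ucb(means, horizon, floor_frac=0.05, seed=0):
    """Toy hard-threshold UCB-style policy on Bernoulli arms.

    Hypothetical sketch: an arm with fewer than floor_frac * t pulls
    is "starved" and pulled first (the hard fairness threshold);
    otherwise the arm with the highest UCB1 index is chosen.
    Returns the final pull counts of each arm.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        # Hard threshold: force pulls of arms below the fairness floor.
        starved = [i for i in range(k) if counts[i] < floor_frac * t]
        if starved:
            arm = min(starved, key=lambda i: counts[i])
        else:
            # All counts >= 1 here, so the UCB1 index is well defined.
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = hard_threshold_ucb([0.9, 0.5, 0.2], horizon=5000)
```

With `floor_frac=0.05`, every arm keeps receiving roughly a 5% share of pulls (the fairness floor), while the bulk of the remaining pulls concentrates on the best arm, illustrating the reward-fairness tradeoff the abstract describes.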
Published in: IEEE Transactions on Information Theory ( Volume: 71, Issue: 3, March 2025)
Page(s): 1909 - 1925
Date of Publication: 03 January 2025

