Improved Rates for Derivative Free Gradient Play in Strongly Monotone Games


Abstract:

The influential work of Bravo et al. [1] shows that derivative free gradient play in strongly monotone games has complexity O(d²/ε³), where ε is the target accuracy on the expected squared distance to the solution. This paper shows that the efficiency estimate is actually O(d²/ε²), which reduces to the known efficiency guarantee for the method in unconstrained optimization. The argument we present simply interprets the method as stochastic gradient play on a slightly perturbed strongly monotone game to achieve the improved rate.
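To make the method concrete, the following is a minimal sketch of derivative free (single-point, bandit-feedback) gradient play on a toy strongly monotone game. The two-player quadratic game, the function names, and the step-size and exploration schedules below are all illustrative assumptions, not taken from the paper.

```python
import random

# Toy 2-player strongly monotone game (illustrative, not from the paper):
# player i chooses a scalar x_i to minimize
#     f_i(x) = 0.5 * x_i**2 + 0.25 * x_i * x_{1-i}.
# The pseudo-gradient v(x) = (x_0 + 0.25*x_1, x_1 + 0.25*x_0) is strongly
# monotone, and the unique Nash equilibrium is x* = (0, 0).
def cost(i, x):
    return 0.5 * x[i] ** 2 + 0.25 * x[i] * x[1 - i]

def single_point_grad(x, delta, rng):
    """Bandit-feedback estimate: each player perturbs its action by delta*u_i
    with u_i drawn uniformly from the unit sphere (here {-1, +1}, since each
    action is scalar), observes only its own realized cost, and forms
    g_i = (d / delta) * f_i(x + delta*u) * u_i with d = 1."""
    u = [rng.choice((-1.0, 1.0)) for _ in x]
    xq = [xi + delta * ui for xi, ui in zip(x, u)]
    return [(1.0 / delta) * cost(i, xq) * u[i] for i in range(len(x))]

def derivative_free_play(T=20000, eta0=0.2, delta0=0.5, seed=0):
    """Each player descends its own cost using only single-point estimates;
    the step size eta_t and exploration radius delta_t shrink over time."""
    rng = random.Random(seed)
    x = [1.0, 1.0]
    for t in range(1, T + 1):
        eta = eta0 / t
        delta = delta0 / t ** 0.25
        g = single_point_grad(x, delta, rng)
        x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x
```

Note that each player queries its cost only once per round and never sees a gradient; the perturbed query point is exactly what lets the iterates be reinterpreted as stochastic gradient play on a slightly perturbed game.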
Date of Conference: 06-09 December 2022
Date Added to IEEE Xplore: 10 January 2023
Conference Location: Cancun, Mexico

I. Introduction

Game theoretic abstractions are foundational in many application domains, ranging from machine learning to reinforcement learning to control theory. For instance, in machine learning, game theoretic abstractions are used to develop solutions for learning from adversarial or otherwise strategically generated data (see, e.g., [2]–[4]). Analogously, in reinforcement learning and control theory, game theoretic abstractions are used to develop robust algorithms and policies (see, e.g., [5]–[9]). Additionally, they are used to capture interactions between multiple decision-making entities and to model asymmetric information and incentive problems (see, e.g., [7], [8], [10]).
