Game-theoretic learning in potential games is a highly active research area, stemming from the close connection between potential games and distributed optimisation. In many settings an optimisation problem can be represented by a potential game in which the optimal solution corresponds to the potential function maximiser. Accordingly, significant research attention has focused on the design of distributed learning algorithms that guarantee convergence to the potential function maximiser in potential games. However, no existing algorithms guarantee convergence to the potential function maximiser when utility functions are corrupted by noise. In this paper we rectify this issue by demonstrating that a version of payoff-based log-linear learning guarantees that the only stochastically stable states are potential function maximisers, even in noisy settings.
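To make the setting concrete, the following is a minimal sketch of standard log-linear learning (not the payoff-based, noise-robust variant studied in the paper) on a hypothetical 2x2 identical-interest game, where each player's utility equals the potential. The game matrix, the inverse temperature `beta`, and the step count are illustrative choices, not taken from the paper. At each step one player, chosen uniformly at random, revises its action by sampling from a Gibbs/Boltzmann distribution over its own actions with the other player's action held fixed; for large `beta` the process concentrates on the potential function maximiser.

```python
import math
import random

# Hypothetical 2x2 identical-interest game: the potential phi is the common
# payoff, and the potential maximiser is the joint action (1, 1).
PHI = [[1.0, 0.0],
       [0.0, 2.0]]  # phi(a0, a1)

def utility(player, a):
    # In an identical-interest game every player's utility is the potential,
    # so unilateral utility differences equal potential differences.
    return PHI[a[0]][a[1]]

def log_linear_step(a, beta, rng):
    # Pick one player uniformly at random to revise its action.
    i = rng.randrange(2)
    # Gibbs/Boltzmann choice over that player's two actions, other fixed.
    weights = []
    for x in range(2):
        trial = list(a)
        trial[i] = x
        weights.append(math.exp(beta * utility(i, trial)))
    r = rng.random() * sum(weights)
    choice = 0 if r < weights[0] else 1
    a = list(a)
    a[i] = choice
    return tuple(a)

def run(beta=5.0, steps=5000, seed=0):
    # Track empirical visit counts of joint actions along one sample path.
    rng = random.Random(seed)
    a = (0, 0)
    counts = {}
    for _ in range(steps):
        a = log_linear_step(a, beta, rng)
        counts[a] = counts.get(a, 0) + 1
    return counts

counts = run()
best = max(counts, key=counts.get)
print(best)  # for large beta this should be the potential maximiser (1, 1)
```

With `beta = 5.0` the stationary distribution places almost all mass on `(1, 1)`, illustrating the stochastic-stability statement in the abstract; the paper's contribution is that a payoff-based variant retains this property even when the observed utilities are noisy.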
Date of Conference: 12-14 Oct. 2011