
Decision Making in Monopoly Using a Hybrid Deep Reinforcement Learning Approach



Abstract:

Learning to adapt and make real-time informed decisions in a dynamic and complex environment is a challenging problem. Monopoly is a popular strategic board game that requires players to make multiple decisions during the game. Decision-making in Monopoly involves many real-world elements such as strategizing, luck, and modeling of opponents' policies. In this paper, we present novel representations for the state and action space for the full version of Monopoly and define an improved reward function. Using these, we show that our deep reinforcement learning agent can learn winning strategies for Monopoly against different fixed-policy agents. In Monopoly, players can take multiple actions even if it is not their turn to roll the dice. Some of these actions occur more frequently than others, resulting in a skewed distribution that adversely affects the performance of the learning agent. To tackle the non-uniform distribution of actions, we propose a hybrid approach that combines deep reinforcement learning (for frequent but complex decisions) with a fixed-policy approach (for infrequent but straightforward decisions). We develop learning agents using proximal policy optimization (PPO) and double deep Q-learning (DDQN) algorithms and compare the standard approach to our proposed hybrid approach. Experimental results show that our hybrid agents outperform standard agents by 20% in the number of games won against fixed-policy agents. The hybrid PPO agent performs the best with a win rate of 91% against fixed-policy agents.
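As a rough illustration of the hybrid idea summarized above, the sketch below routes frequent but complex decisions to a learned (PPO/DDQN) policy and infrequent but straightforward decisions to a hand-coded rule. The decision categories and all names (FREQUENT_DECISIONS, learned_policy, fixed_rules, choose_action) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the hybrid decision routing described in the abstract.
# Decision categories, policy objects, and function names are assumptions made
# for illustration only; the paper's actual action taxonomy differs in detail.

FREQUENT_DECISIONS = {"buy_property", "trade", "improve_property"}      # routed to DRL
INFREQUENT_DECISIONS = {"pay_jail_fine", "use_get_out_of_jail_card"}    # routed to fixed rules


def choose_action(decision_type, state, learned_policy, fixed_rules):
    """Route a decision either to the learned RL policy or to a fixed-policy rule."""
    if decision_type in FREQUENT_DECISIONS:
        # Frequent but complex decisions: query the trained PPO/DDQN policy.
        return learned_policy.act(state)
    # Infrequent but straightforward decisions: apply a simple hand-coded rule.
    return fixed_rules[decision_type](state)
```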
Page(s): 1335 - 1344
Date of Publication: 16 May 2022
Electronic ISSN: 2471-285X

I. Introduction

Despite numerous advances in deep reinforcement learning (DRL), the majority of successes have been in two-player, zero-sum games such as Chess and Go [2], where DRL is guaranteed to converge to an optimal policy [1]. Rare (and relatively recent) exceptions include Blade & Soul [3], no-press Diplomacy [4], Poker [6], and StarCraft [7], [8]. We note that, even in the case of Poker, a two-player version of Texas Hold 'em was initially assumed [5] but later superseded by a multi-player system. In particular, there has been little work on agent development for the full four-player game of Monopoly, despite it being one of the most popular strategic board games of the last 85 years.
