
Decision Making in Monopoly Using a Hybrid Deep Reinforcement Learning Approach


Abstract:

Learning to adapt and make real-time informed decisions in a dynamic and complex environment is a challenging problem. Monopoly is a popular strategic board game that requires players to make multiple decisions during the game. Decision-making in Monopoly involves many real-world elements such as strategizing, luck, and modeling of opponent’s policies. In this paper, we present novel representations for the state and action space for the full version of Monopoly and define an improved reward function. Using these, we show that our deep reinforcement learning agent can learn winning strategies for Monopoly against different fixed-policy agents. In Monopoly, players can take multiple actions even if it is not their turn to roll the dice. Some of these actions occur more frequently than others, resulting in a skewed distribution that adversely affects the performance of the learning agent. To tackle the non-uniform distribution of actions, we propose a hybrid approach that combines deep reinforcement learning (for frequent but complex decisions) with a fixed-policy approach (for infrequent but straightforward decisions). We develop learning agents using proximal policy optimization (PPO) and double deep Q-learning (DDQN) algorithms and compare the standard approach to our proposed hybrid approach. Experimental results show that our hybrid agents outperform standard agents by 20% in the number of games won against fixed-policy agents. The hybrid PPO agent performs the best with a win rate of 91% against fixed-policy agents.
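
As a rough illustration of the hybrid approach described in the abstract, the sketch below routes each decision either to a learned deep RL policy (for frequent, complex decisions) or to a hand-coded rule (for infrequent, straightforward ones). The decision categories, the select_action interface, and the jail-fine rule shown here are illustrative assumptions, not the paper's actual implementation.

import random
from typing import Any, Dict, List

# Hypothetical split: frequent, complex decisions go to the learned policy
# (e.g., a PPO or DDQN agent); infrequent, simple decisions use fixed rules.
LEARNED_DECISIONS = {"buy_property", "improve_property", "trade"}
FIXED_DECISIONS = {"pay_jail_fine", "mortgage_to_avoid_bankruptcy"}


class HybridMonopolyAgent:
    def __init__(self, learned_policy):
        # learned_policy is assumed to expose select_action(state, actions).
        self.learned_policy = learned_policy

    def act(self, state: Dict[str, Any], legal_actions: List[str]) -> str:
        decision_type = state["decision_type"]
        if decision_type in LEARNED_DECISIONS:
            # Frequent, complex decision: defer to the deep RL policy.
            return self.learned_policy.select_action(state, legal_actions)
        if decision_type in FIXED_DECISIONS:
            # Infrequent, straightforward decision: apply a fixed rule.
            return self.fixed_rule(state, legal_actions)
        # Fallback for any unclassified decision type.
        return random.choice(legal_actions)

    @staticmethod
    def fixed_rule(state: Dict[str, Any], legal_actions: List[str]) -> str:
        # Example fixed rule: pay the jail fine whenever cash allows.
        if "pay_jail_fine" in legal_actions and state.get("cash", 0) >= 50:
            return "pay_jail_fine"
        return "skip"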
Page(s): 1335 - 1344
Date of Publication: 16 May 2022
Electronic ISSN: 2471-285X
