The aim of general game playing (GGP) is to create intelligent agents that can automatically learn how to play many different games at an expert level without any human intervention. The traditional design model for GGP agents has been to use a minimax-based game-tree search augmented with an automatically learned heuristic evaluation function. The first successful GGP agents all followed that approach. In this paper, we describe CadiaPlayer, a GGP agent employing a radically different approach: instead of a traditional game-tree search, it uses Monte Carlo simulations for its move decisions. Furthermore, we empirically evaluate different simulation-based approaches on a wide variety of games, introduce a domain-independent enhancement for automatically learning search-control knowledge to guide the simulation playouts, and show how to adapt the simulation searches to be more effective in single-agent games. CadiaPlayer has already proven its effectiveness by winning the 2007 and 2008 Association for the Advancement of Artificial Intelligence (AAAI) GGP competitions.
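To make the simulation-based approach concrete, here is a minimal, hypothetical sketch (not CadiaPlayer's actual implementation, which uses UCT-guided playouts over games described in GDL) of flat Monte Carlo move selection on a toy single-pile Nim game: each legal move is evaluated by the average outcome of random playouts, and the move with the best average is chosen.

```python
import random

# Toy game (illustrative only): single-pile Nim. Players alternate removing
# 1-3 stones; the player who takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def random_playout(stones, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return to_move  # the player who just moved took the last stone
        to_move = 1 - to_move

def monte_carlo_move(stones, player, n_playouts=200):
    """Pick the move with the best average playout result for `player`."""
    best_move, best_score = None, -1.0
    for move in legal_moves(stones):
        wins = 0
        for _ in range(n_playouts):
            remaining = stones - move
            if remaining == 0:
                wins += 1  # taking the last stone wins immediately
            else:
                wins += (random_playout(remaining, 1 - player) == player)
        score = wins / n_playouts
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

For example, with three stones left, `monte_carlo_move(3, 0)` reliably selects the immediately winning move of taking all three, since its playout score is 1.0 while the alternatives score lower. The paper's simulation-based enhancements (learned search control, single-agent adaptations) refine this basic scheme by biasing which moves the playouts explore.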
IEEE Transactions on Computational Intelligence and AI in Games (Volume: 1, Issue: 1)
Date of Publication: March 2009