
IEEE Transactions on Computational Intelligence and AI in Games

Issue 1 • Date March 2010


  • Table of contents

    Page(s): C1
    PDF (100 KB)
    Freely Available from IEEE
  • IEEE Transactions on Computational Intelligence and AI in Games publication information

    Page(s): C2
    PDF (37 KB)
    Freely Available from IEEE
  • Evolutionary Game Design

    Page(s): 1 - 16
    PDF (1190 KB) | HTML

    It is easy to create new combinatorial games but more difficult to predict those that will interest human players. We examine the concept of game quality, its automated measurement through self-play simulations, and its use in the evolutionary search for new high-quality games. A general game system called Ludi is described, and experiments are conducted to test its ability to synthesize and evaluate new games. The results demonstrate the validity of the approach through the automated creation of novel, interesting, and publishable games.

  • RL-DOT: A Reinforcement Learning NPC Team for Playing Domination Games

    Page(s): 17 - 26
    PDF (764 KB) | HTML

    In this paper, we describe the design of a reinforcement-learning-based domination team (RL-DOT), a nonplayer character (NPC) team for playing Unreal Tournament (UT) Domination games. RL-DOT comprises a commander NPC and several soldier NPCs, and runs over a series of decision cycles. In each cycle, the commander NPC decides on a troop distribution and, according to that decision, sends action orders to the soldier NPCs. Each soldier NPC tries to accomplish its task in a goal-directed way, i.e., by decomposing its overall task (attacking or defending a domination point) into basic actions (such as running and shooting) that are directly supported by UT application programming interfaces (APIs). We use a Q-learning-style algorithm to learn the optimal decision-making policy, and carefully choose opponent policies for our illustrative experiments. In these experiments, RL-DOT shows a distinct learning curve, illustrating its effectiveness in playing UT Domination games.

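The commander's learning loop described in the abstract can be pictured with a standard tabular Q-learning update. This is a hypothetical, heavily simplified sketch, not the authors' code: the action set, state encoding, and all constants below are invented for illustration.

```python
import random

# Simplified sketch of a Q-learning-style commander: it picks one of a few
# candidate troop distributions per decision cycle and updates a tabular
# Q-function from the observed reward. Names and numbers are illustrative.

ACTIONS = ["all_attack", "all_defend", "split"]   # candidate troop distributions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2             # learning rate, discount, exploration

q_table = {}  # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the candidate troop distributions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup after a decision cycle."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In the paper's setting, the state would summarize domination-point ownership and the reward would come from score changes; here both are left abstract.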
  • Moving-Target Pursuit Algorithm Using Improved Tracking Strategy

    Page(s): 27 - 39
    PDF (2293 KB) | HTML

    Pursuing a moving target in modern computer games presents several challenges to situated agents, including real-time response, a large search space, severely limited computational resources, incomplete environmental knowledge, adversarial escape strategies, and the need to outsmart the opponent. In this paper, we propose a novel tracking automatic optimization moving-target pursuit (TAO-MTP) algorithm that employs an improved tracking strategy to address all of the above challenges for the single-hunter, single-prey problem. TAO-MTP uses a queue to store the prey's trajectory and repeatedly runs real-time adaptive A* (RTAA*) to approach, within a limited number of steps, the periodically updated optimal position in that trajectory, minimizing the overall pursuit cost. In the process, the hunter may speculatively move to any explored position in the trajectory, not necessarily the optimal one, to speed up convergence, and then moves directly along the trajectory to pursue the prey. Moreover, automatic optimization methods, such as reducing trajectory storage and optimizing the pursuit path, further enhance its performance. As long as the hunter moves faster than the prey and its sensing range is large enough, it will eventually capture the prey. Experiments on commercial game maps show that TAO-MTP is independent of the adversary's escape strategy and outperforms classic and state-of-the-art moving-target pursuit algorithms such as extended moving-target search (eMTS), path refinement moving-target search (PR MTS), moving-target adaptive A* (MTAA*), and generalized adaptive A* (GAA*).

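The central idea of chasing points on the prey's recorded trajectory, rather than its current position, can be sketched in a few lines. This is an illustrative toy, not the authors' algorithm: a greedy grid step stands in for the RTAA* search, and the map, coordinates, and step limit are invented.

```python
from collections import deque

def step_toward(pos, target):
    """Move one cell toward target on a 4-connected grid (greedy stand-in for RTAA*)."""
    x, y = pos
    tx, ty = target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return pos

def pursue(hunter, prey_path, max_steps=100):
    """Chase points from the prey's recorded trajectory, oldest first, until the queue is exhausted."""
    trajectory = deque(prey_path)   # observed prey positions stored in a queue
    steps = 0
    while trajectory and steps < max_steps:
        target = trajectory[0]
        hunter = step_toward(hunter, target)
        if hunter == target:        # reached this trajectory point; advance along the path
            trajectory.popleft()
        steps += 1
    return hunter, steps
```

Because the hunter follows where the prey has been, its behavior does not depend on how the prey chooses to flee, which mirrors the paper's claim that TAO-MTP is independent of the escape strategy.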
  • Using Resource-Limited Nash Memory to Improve an Othello Evaluation Function

    Page(s): 40 - 53
    PDF (665 KB) | HTML

    Finding the best strategy for winning a game using self-play or coevolution can be hindered by intransitivity among strategies and a changing fitness landscape. Nash Memory has been proposed as an archive for coevolution, to counter intransitivity and provide a more consistent fitness landscape. A lack of bounds on archive size might impede its use in a large, complex domain, such as the game of Othello, with strategies described by n-tuple networks. This paper demonstrates that even with a bounded-size archive, an evolving population can continue to show progress past the point where self-play no longer can. Characteristics of Nash equilibria are shown to be valuable in the measurement of performance. In addition, a technique for automated selection of features is demonstrated for the n-tuple networks.

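The mechanics of a size-bounded archive can be illustrated with a toy admission/eviction rule. This is a deliberate simplification: the real Nash Memory maintains a Nash-equilibrium mixture over archived strategies, whereas the win-count heuristic below only shows what "resource-limited" means operationally; the game, the size bound, and the eviction rule are all invented for the example.

```python
MAX_SIZE = 5  # hypothetical resource limit on the archive

def beats(a, b):
    """Toy game: the higher number wins. Illustration only, not Othello."""
    return a > b

def try_admit(archive, candidate):
    """Admit a candidate that beats someone; evict the weakest member when over the bound."""
    if archive and not any(beats(candidate, m) for m in archive):
        return archive                  # candidate adds nothing to the archive
    archive = archive + [candidate]
    if len(archive) > MAX_SIZE:
        # resource limit hit: evict the member that beats the fewest others
        wins = {m: sum(beats(m, o) for o in archive if o != m) for m in archive}
        archive.remove(min(archive, key=lambda m: wins[m]))
    return archive
```

The paper's contribution is precisely that a bound like this, applied to a principled Nash-based archive, does not stop coevolutionary progress in a domain as large as Othello.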
  • Modeling Player Experience for Content Creation

    Page(s): 54 - 67
    PDF (844 KB) | HTML

    In this paper, we use computational intelligence techniques to build quantitative models of player experience for a platform game. The models accurately predict certain key affective states of the player based both on gameplay metrics that capture the actions performed by the player in the game and on parameters of the level that was played. For the experiments presented here, a version of the classic Super Mario Bros game is enhanced with parameterizable level generation and gameplay metrics collection. Pairwise player preference data are collected using forced-choice questionnaires, and the models are trained on these data using neuroevolutionary preference learning of multilayer perceptrons (MLPs). The derived models can then be used to optimize design parameters for particular types of player experience, allowing the designer to automatically generate unique levels that induce the desired experience for the player.

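The pairwise-preference training signal described in the abstract can be sketched with a linear scorer and a perceptron-style rank update, which stands in for the paper's neuroevolutionary training of MLPs. The features and data below are purely invented for the example.

```python
def score(weights, features):
    """Linear stand-in for the MLP: predicted experience score of a level."""
    return sum(w * f for w, f in zip(weights, features))

def train_preferences(pairs, n_features, lr=0.1, epochs=50):
    """Each pair is (preferred_features, other_features) from a forced-choice answer.
    Update weights whenever the preferred level does not score strictly higher."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for preferred, other in pairs:
            if score(w, preferred) <= score(w, other):   # preference violated
                for i in range(n_features):
                    w[i] += lr * (preferred[i] - other[i])
    return w

# Toy data: hypothetical players preferred levels with more gaps (feature 0)
# and fewer enemies (feature 1).
pairs = [([3.0, 1.0], [1.0, 4.0]), ([2.0, 0.0], [1.0, 2.0])]
w = train_preferences(pairs, 2)
```

Once such a model scores levels consistently with reported preferences, level parameters can be searched to maximize the predicted score, which is the content-creation use the paper targets.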
  • 2010 IEEE World Congress on Computational Intelligence

    Page(s): 68
    PDF (2434 KB)
    Freely Available from IEEE
  • IEEE Computational Intelligence Society Information

    Page(s): C3
    PDF (37 KB)
    Freely Available from IEEE
  • IEEE Transactions on Computational Intelligence and AI in Games Information for authors

    Page(s): C4
    PDF (28 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Computational Intelligence and AI in Games (T-CIAIG) publishes archival, journal-quality original papers in computational intelligence and related areas of artificial intelligence applied to games, including but not limited to videogames, mathematical games, human–computer interaction in games, and games involving physical objects. Emphasis is placed on using these methods to improve performance in games and to deepen understanding of game dynamics, as well as on gaining insight into the properties of the methods themselves as applied to games. The scope also includes using games as a platform for building intelligent embedded agents for the real world. Papers connecting games to all areas of computational intelligence and traditional AI are considered.


Meet Our Editors

Editor-in-Chief
Simon M. Lucas
School of Computer Science and Electronic Engineering
University of Essex
Colchester, Essex CO4 3SQ, U.K.
sml@essex.ac.uk
Phone: +44 1206 872 048
Fax: +44 1206 872 788