IEEE Symposium on Computational Intelligence and Games, 2007 (CIG 2007)

Date: 1-5 April 2007

Displaying Results 1 - 25 of 56
  • [Commentary]

    Publication Year: 2007, Page(s): nil1 - nil2
    Freely Available from IEEE
  • Program Committee

    Publication Year: 2007, Page(s): nil3
    Freely Available from IEEE
  • IEEE Symposium on Computational Intelligence and Games (CIG 2007)

    Publication Year: 2007, Page(s): nil4 - nil7
    Freely Available from IEEE
  • Snooker Robot Player - 20 Years on

    Publication Year: 2007, Page(s): 1 - 8
    Cited by: Papers (1)

    This paper describes the Snooker Machine, an intelligent robotic system that was built between late 1985 and early 1988. The project was documented by the BBC over the course of two years; "The Snooker Machine" was broadcast on the BBC's terrestrial channel in the UK in the one-hour Q.E.D. science programme of 16th March 1988. This paper summarizes the technical details of the system, which consisted of a vision system, a fuzzy expert system and a robot manipulator. It outlines some of the difficulties that the Snooker Machine had to overcome in playing a game of snooker against a human player. Given the recent interest in developing robotic systems to play pool (Leekie and Greenspan, 2005), (Greenspan, 2006), and (Ghan et al., 2002), this paper looks back at some of these issues. It also outlines some computational intelligence approaches that may lead to solving some of the problems using today's technology.

  • Micro Robot Hockey Simulator - Game Engine Design

    Publication Year: 2007, Page(s): 9 - 16
    Cited by: Papers (3)

    Like robot soccer, robot hockey is a game played between two teams of robots. A robot hockey simulator has been created for the purpose of game-strategy testing and result visualization. One major modification in robot hockey is the addition of a puck-shooting mechanism to each robot. As a result, the mechanics of interaction between the robots and the hockey puck become a key design issue. This paper describes the simulator design considerations for robotic hockey games. A potential-field-based strategy planner is implemented and used to develop strategies for moving the robots autonomously. The results of the simulation study show both successful cooperation between robots (on the strategy level) and realistic interaction between robots and the puck.
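    The potential-field planning the abstract mentions can be sketched as below. This is a minimal illustration, not the paper's implementation; the function name, the gains k_att/k_rep, and the influence radius d0 are assumptions.

```python
import math

def potential_field_velocity(robot, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Combine an attractive pull toward the goal with repulsive pushes
    away from nearby obstacles (classic potential-field planning)."""
    # Attractive component: proportional to the vector toward the goal.
    vx = k_att * (goal[0] - robot[0])
    vy = k_att * (goal[1] - robot[1])
    # Repulsive component: only obstacles within influence radius d0 contribute.
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            scale = k_rep * (1.0 / d - 1.0 / d0) / (d ** 2)
            vx += scale * dx
            vy += scale * dy
    return vx, vy
```

    Each robot would follow the resulting velocity vector each simulation tick; opposing robots and the rink walls play the role of obstacles.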

  • On Experiences in a Complex and Competitive Gaming Domain: Reinforcement Learning Meets RoboCup

    Publication Year: 2007, Page(s): 17 - 23
    Cited by: Papers (9)

    RoboCup soccer simulation features the challenges of a fully distributed multi-agent domain with continuous state and action spaces, partial observability, and noisy perception and action execution. While the application of machine learning techniques in this domain is a promising idea in itself, the competitive character of RoboCup also evokes the desire to develop learning algorithms that are more than just a proof of concept. In this paper, we report on our experiences and achievements in applying reinforcement learning (RL) methods within our Brainstormers competition team in the Simulation League of RoboCup over the past years.

  • Extracting NPC behavior from computer games using computer vision and machine learning techniques

    Publication Year: 2007, Page(s): 24 - 31
    Cited by: Papers (1)

    We present a first application of a general approach to learning the behavior of NPCs (and other entities) in a game by observing just the graphical output of the game during play. This allows some understanding of what a human player might be able to learn during game play. The approach uses object tracking and situation-action pairs with the nearest-neighbor rule. For the game of Pong, we were able to predict the correct behavior of the computer-controlled components approximately 9 out of 10 times, even while keeping the use of knowledge about the game (beyond observing the images) to a minimum.
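    The situation-action nearest-neighbor rule the abstract describes can be sketched as follows. The feature encoding (ball and paddle y-coordinates) and the example data are hypothetical, chosen only to illustrate the mechanism.

```python
def predict_action(history, situation):
    """Nearest-neighbor rule over recorded (situation, action) pairs:
    return the action whose stored situation is closest to the query."""
    best_action, best_dist = None, float("inf")
    for s, a in history:
        # Squared Euclidean distance between feature vectors.
        d = sum((si - qi) ** 2 for si, qi in zip(s, situation))
        if d < best_dist:
            best_dist, best_action = d, a
    return best_action

# Hypothetical Pong observations: situations are (ball_y, paddle_y)
# extracted by object tracking; actions are the tracked paddle moves.
history = [((0.2, 0.8), "up"), ((0.9, 0.1), "down"), ((0.5, 0.5), "stay")]
```

    A new situation is then classified by the action of its closest recorded neighbor, e.g. `predict_action(history, (0.25, 0.75))`.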

  • Waving Real Hand Gestures Recorded by Wearable Motion Sensors to a Virtual Car and Driver in a Mixed-Reality Parking Game

    Publication Year: 2007, Page(s): 32 - 39
    Cited by: Papers (6)

    We envision adding context awareness and ambient intelligence to edutainment and computer gaming applications in general. This requires mixed-reality setups and ever-higher levels of immersive human-computer interaction. Here, we focus on the automatic recognition of natural human hand gestures recorded by inexpensive, wearable motion sensors. To study the feasibility of our approach, we chose an educational parking game with 3D graphics that employs motion sensors and hand gestures as its sole game controls. Our prototype implementation is based on Java-3D for the graphics display and on our own CRN Toolbox for sensor integration. It shows very promising results in practice regarding game appeal, player satisfaction, extensibility, ease of interfacing to the sensors, and, last but not least, sufficient accuracy of the real-time gesture recognition to allow for smooth game control. An initial quantitative performance evaluation confirms these notions and provides further support for our setup.

  • Adaptation of Iterated Prisoner's Dilemma Strategies by Evolution and Learning

    Publication Year: 2007, Page(s): 40 - 47
    Cited by: Papers (3)

    This paper examines the performance and adaptability of evolutionary, learning and memetic strategies in different environment settings of the iterated prisoner's dilemma (IPD). A memetic adaptation framework is devised for IPD strategies to exploit the complementary features of evolution and learning. In this paradigm, learning serves as a form of directed search that guides evolutionary strategies toward good strategy traits, while evolution helps to minimize disparity in performance between learning strategies. A cognitive double-loop incremental learning scheme (ILS) that encompasses a perception component, probabilistic revision of strategies and a feedback learning mechanism is also proposed and incorporated into evolution. Simulation results verify that the two techniques, when employed together, complement each other's strengths and compensate for each other's weaknesses, leading to the formation of good strategies that adapt and thrive in complex, dynamic environments.

  • Cooperation in Prisoner's Dilemma on Graphs

    Publication Year: 2007, Page(s): 48 - 55
    Cited by: Papers (10)

    A combinatorial graph can be used to place a geography on a population of evolving agents. In this paper, agents are trained to play prisoner's dilemma while situated on combinatorial graphs. A collection of thirteen different combinatorial graphs is used. The graph always limits which agents can mate during reproduction. Two sets of experiments are performed for each graph: one in which agents only play prisoner's dilemma against their neighbors, and one in which fitness is evaluated by a round-robin tournament among all population members. Populations are evaluated on their level of cooperativeness, the type of play they engage in, and the type and diversity of strategies that are present. This latter analysis relies on the fingerprinting of players, a representation-independent method of identifying strategies. Changing the combinatorial graph on which a population lives is found to yield statistically significant changes in the character of the evolved populations for all the metrics used.
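    The neighbors-only evaluation mode described above can be sketched as a single round of play on a graph. The payoff values are the standard prisoner's dilemma numbers; the graph, agent names and move assignment are invented for illustration.

```python
# Standard PD payoffs for (my move, opponent move): C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def graph_fitness(graph, moves):
    """Each agent plays one round of prisoner's dilemma against each of
    its graph neighbors; fitness is the agent's total payoff."""
    scores = {v: 0 for v in graph}
    for v, nbrs in graph.items():
        for u in nbrs:
            scores[v] += PAYOFF[(moves[v], moves[u])]
    return scores

# A 3-cycle: two cooperators are exploitable by the lone defector.
cycle = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
```

    The round-robin mode of the paper corresponds to replacing each neighbor list with the full population; restricting play (and mating) to graph neighbors is what lets the graph topology shape the evolved population.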

  • Information Sharing in the Iterated Prisoner's Dilemma Game

    Publication Year: 2007, Page(s): 56 - 62
    Cited by: Papers (5)

    In the iterated prisoner's dilemma (IPD) game, players normally have access only to their own history, without being able to communicate global information. In this paper, we introduce information sharing among players of the IPD game. During the co-evolutionary process, players obtain access, through information sharing, to the common strategy adopted by the majority of the population in the previous generation. An extra bit is added to the history portion of the strategy chromosome. This extra bit holds a value of 0 if decisions to cooperate outnumbered decisions to defect in the last generation, and 1 otherwise. We show that information sharing alters the dynamics of the IPD game.
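    The shared-bit mechanism can be sketched as below: the population's majority move becomes one extra index bit into each player's look-up strategy. The two-move history length and the strategy table contents are assumptions for illustration, not taken from the paper.

```python
def majority_bit(population_moves):
    """0 if cooperations outnumbered defections in the last generation,
    1 otherwise ('C' = cooperate, 'D' = defect)."""
    c = sum(m == 'C' for m in population_moves)
    d = len(population_moves) - c
    return 0 if c > d else 1

def lookup_move(strategy, own_history_bits, shared_bit):
    """Index a strategy table by the player's own history plus the
    shared majority bit appended to it."""
    key = tuple(own_history_bits) + (shared_bit,)
    return strategy[key]

# Hypothetical strategy: cooperate only when its own history is peaceful
# AND the population majority cooperated (shared bit 0).
strategy = {
    (0, 0, 0): 'C', (0, 0, 1): 'D',
    (0, 1, 0): 'C', (0, 1, 1): 'D',
    (1, 0, 0): 'D', (1, 0, 1): 'D',
    (1, 1, 0): 'D', (1, 1, 1): 'D',
}
```

    During co-evolution the shared bit is recomputed once per generation from all moves played, so every chromosome conditions on the same global summary.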

  • A Comparison of Genetic Programming and Look-up Table Learning for the Game of Spoof

    Publication Year: 2007, Page(s): 63 - 71
    Cited by: Papers (5)

    Many games require opponent modeling for optimal performance. The implicit learning and adaptive nature of evolutionary computation techniques offers a natural way to develop and explore models of an opponent's strategy without significant overhead. In this paper, we compare two learning techniques for strategy development in the game of Spoof, a simple guessing game of imperfect information. We compare a genetic programming approach with a look-up table based approach, contrasting the performance of each in different scenarios of the game. Results show both approaches have their advantages, but the genetic programming approach achieves better performance in scenarios with little public information. We also trial both approaches against opponents who vary their strategy; results show that the genetic programming approach is better able to respond to strategy changes than the look-up table based approach.

  • Using a Genetic Algorithm to Explore A*-like Pathfinding Algorithms

    Publication Year: 2007, Page(s): 72 - 79
    Cited by: Papers (6)

    We use a genetic algorithm to explore the space of pathfinding algorithms in Lagoon, a 3D naval real-time strategy game and training simulation. To aid in training, Lagoon tries to provide a rich environment with many agents (boats) that maneuver realistically. A*, the traditional pathfinding algorithm in games, is computationally expensive when run for many agents, and A* paths quickly lose validity as agents move. Although there is a large literature targeted at making A* implementations faster, we want believability, and optimal paths may not be believable. In this paper, we use a genetic algorithm to search the space of network search algorithms like A* to find new pathfinding algorithms that are near-optimal, fast, and believable. Our results indicate that the genetic algorithm can explore this space well and that novel pathfinding algorithms (found by our genetic algorithm) quickly find near-optimal, more-believable paths in Lagoon.
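    One way to picture a searchable space of "A*-like" algorithms is to parameterize best-first search by weights on the cost-so-far and the heuristic; a GA individual is then a weight vector. This parameterization is an assumption used only to illustrate the idea, not the encoding the paper actually evolves.

```python
import heapq

def weighted_search(start, goal, neighbors, h, wg=1.0, wh=1.0):
    """A*-like best-first search; the (wg, wh) weights span a family of
    algorithms (wg=wh=1 is A*, wg=0 is greedy best-first, wh=0 is Dijkstra)."""
    frontier = [(wh * h(start), 0.0, start, [start])]
    best_g = {}  # cheapest cost-so-far seen per node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (wg * g2 + wh * h(nxt), g2, nxt, path + [nxt]))
    return None
```

    A fitness function would then score each (wg, wh, ...) individual on path optimality, search speed, and a believability measure.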

  • Adversarial Planning Through Strategy Simulation

    Publication Year: 2007, Page(s): 80 - 87
    Cited by: Papers (6)

    Adversarial planning in highly complex decision domains, such as modern video games, has not yet received much attention from AI researchers. In this paper, we present a planning framework that uses strategy simulation in conjunction with Nash-equilibrium strategy approximation. We apply this framework to an army deployment problem in a real-time strategy game setting and present experimental results that indicate a performance gain over the scripted strategies that the system is built on. This technique provides an automated way of increasing the decision quality of scripted AI systems and is therefore ideally suited for video games and combat simulators.
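    The pipeline sketched in the abstract, simulating scripted strategies against each other and then choosing from the resulting payoff matrix, can be illustrated as below. The maximin ("security") choice is a simplified, pure-strategy stand-in for the paper's Nash-equilibrium approximation, and the win-rate numbers are invented.

```python
def simulate_matrix(strategies, simulate):
    """Play every pair of scripted strategies against each other and
    record the simulated win rate of the row strategy."""
    return [[simulate(a, b) for b in strategies] for a in strategies]

def security_strategy(payoff):
    """Maximin choice: the row whose worst-case simulated outcome is best.
    In a zero-sum setting the maximin VALUE matches the Nash value, though
    a full equilibrium generally requires mixing over rows."""
    return max(range(len(payoff)), key=lambda i: min(payoff[i]))
```

    In play, the agent would periodically re-simulate from the current game state and switch to the script the analysis currently favors.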

  • Co-Evolving Influence Map Tree Based Strategy Game Players

    Publication Year: 2007, Page(s): 88 - 95
    Cited by: Papers (10)

    We investigate the use of genetic algorithms to evolve AI players for real-time strategy games. To overcome the knowledge acquisition bottleneck found in using traditional expert systems, scripts, or decision trees we evolve players through co-evolution. Our game players are implemented as resource allocation systems. Influence map trees are used to analyze the game-state and determine promising places to attack, defend, etc. These spatial objectives are chained to non-spatial objectives (train units, build buildings, gather resources) in a dependency graph. Players are encoded within the individuals of a genetic algorithm and co-evolved against each other, with results showing the production of strategies that are innovative, robust, and capable of defeating a suite of hand-coded opponents.

  • Modelling the Evolution of Cooperative Behavior in Ad Hoc Networks using a Game Based Model

    Publication Year: 2007, Page(s): 96 - 103
    Cited by: Papers (3)

    In this paper, we address the problem of cooperation and selfish behavior in ad hoc networks. We present a new game-theory-based model to study cooperation between nodes. This model has some similarities to the iterated prisoner's dilemma under the random pairing game, in which randomly chosen players receive payoffs that depend on the way they behave. The network gaming model includes simple reputation-collection and trust-evaluation mechanisms. In our proposal, the decision whether to forward or discard a packet is determined by a strategy based on the trust level in the packet's source node and some general information about the behavior of the network. A genetic algorithm (GA) is applied to evolve strategies for the participating nodes. These strategies are targeted to maximize the throughput of the network by enforcing cooperation. Experimental results show that the proposed strategy-based approach successfully enforces cooperation, maximizing network throughput.
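    A minimal sketch of the trust-based forwarding decision described above, with invented thresholds, learning rate, and update rule; the GA in the paper would evolve such strategy parameters rather than fix them by hand.

```python
def forward_packet(trust, source, net_coop_rate, trust_threshold=0.5, coop_threshold=0.4):
    """Strategy sketch: forward when the packet's source node is trusted,
    or when overall network cooperation is high enough to be generous."""
    return trust.get(source, 0.0) >= trust_threshold or net_coop_rate >= coop_threshold

def update_trust(trust, node, forwarded, lr=0.2):
    """Reputation collection: blend the observed behavior of a node
    (1 = it forwarded, 0 = it dropped) into its trust level."""
    old = trust.get(node, 0.5)  # unknown nodes start at neutral trust
    trust[node] = (1 - lr) * old + lr * (1.0 if forwarded else 0.0)
```

    Throughput then emerges from how many packets survive end-to-end, which is what the evolved thresholds are selected to maximize.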

  • A Historical Population in a Coevolutionary System

    Publication Year: 2007, Page(s): 104 - 111
    Cited by: Papers (4)

    The use of memory in coevolutionary systems is considered an important mechanism to counter the Red Queen effect. Our research involves incorporating a memory population that the coevolving populations compete against to obtain a fitness that is influenced by past generations. This long-term fitness allows continuous learning that rewards individuals that do well against the current populations as well as previous winning individuals. By allowing continued learning, the individuals in the populations increase their overall ability to play the game of TEMPO, not just to play a single round against the current opposition.

  • Effective Use of Transposition Tables in Stochastic Game Tree Search

    Publication Year: 2007, Page(s): 112 - 116
    Cited by: Papers (4)

    Transposition tables are one common method to improve an alpha-beta searcher. We present two methods for extending the usage of transposition tables to chance nodes during stochastic game tree search. Empirical results show that these techniques can reduce the search effort of Ballard's Star2 algorithm by 37 percent.
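    The idea of caching values at chance nodes can be sketched with plain expectimax and a memo table; this is a simplified illustration, not the Star2/alpha-beta machinery the paper actually extends, and the tree encoding is an assumption.

```python
table = {}  # transposition table shared across the whole search

def value(node):
    """Evaluate a game tree with MAX, MIN and CHANCE nodes, caching the
    values of chance nodes so repeated positions are evaluated once.
    Leaves are numbers; internal nodes are ('max'|'min', children) or
    ('chance', ((prob, child), ...))."""
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    key = (kind, tuple(children)) if kind == 'chance' else None
    if key is not None and key in table:
        return table[key]  # transposition hit: reuse the cached value
    if kind == 'max':
        v = max(value(c) for c in children)
    elif kind == 'min':
        v = min(value(c) for c in children)
    else:  # chance: probability-weighted average of outcomes
        v = sum(p * value(c) for p, c in children)
    if key is not None:
        table[key] = v
    return v
```

    In a real searcher the key would be a Zobrist-style position hash and the entry would also store search depth and bound type, which is where the subtlety of chance-node entries lies.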

  • An Investigation into Tournament Poker Strategy using Evolutionary Algorithms

    Publication Year: 2007, Page(s): 117 - 124
    Cited by: Papers (2)

    In this paper we assess the hypothesis that a strategy including information related to game-specific factors in a poker tournament performs better than one founded on hand strength knowledge alone. Specifically, we demonstrate that the use of information pertaining to opponents' prior actions, the stage of the tournament, one's chip stack size and seating position all contribute towards a statistically significant improvement in the number of tournaments won. Additionally, we test the hypothesis that a strategy which combines information from all the aforementioned factors performs better than one which employs only a single factor. We show that an evolutionary algorithm is successfully able to resolve conflicting signals from the specified factors, and that the resulting strategies are statistically stronger.

  • Bayesian Opponent Modeling in a Simple Poker Environment

    Publication Year: 2007, Page(s): 125 - 131
    Cited by: Papers (2)

    In this paper, we use a simple poker game to investigate Bayesian opponent modeling. Opponents are defined in four distinctive styles, and tactics are developed which defeat each of the respective styles. By analyzing the past actions of each opponent, and comparing to action related probabilities, the most challenging opponent is identified, and the strategy employed is one that aims to counter that player. The opponent modeling player plays well against non-reactive player styles, and also performs well when compared to a player that knows the exact styles of each opponent in advance.
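    The core Bayesian update, revising a belief over opponent styles from each observed action, can be sketched as below. The two styles and their action probabilities are invented for illustration (the paper uses four styles).

```python
def update_style_beliefs(beliefs, likelihoods, action):
    """Bayes update of P(style) from one observed action:
    posterior is proportional to prior times P(action | style)."""
    posterior = {s: beliefs[s] * likelihoods[s][action] for s in beliefs}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Hypothetical styles and their per-action probabilities.
likelihoods = {
    'tight':      {'fold': 0.7, 'call': 0.2, 'raise': 0.1},
    'aggressive': {'fold': 0.1, 'call': 0.2, 'raise': 0.7},
}
beliefs = {'tight': 0.5, 'aggressive': 0.5}
beliefs = update_style_beliefs(beliefs, likelihoods, 'raise')
```

    After each betting action the beliefs sharpen toward the style that best explains the opponent's history, and the counter-tactic for the most probable (or most threatening) style is selected.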

  • Computer Strategies for Solitaire Yahtzee

    Publication Year: 2007, Page(s): 132 - 139
    Cited by: Papers (1)

    Solitaire Yahtzee has been solved completely. However, the optimal strategy is not one a human could practically use, and for computer play it requires either a very large database or significant CPU time. We present some refinements to the techniques used to solve solitaire Yahtzee, give a method for analyzing other solitaire strategies, and give some examples of this analysis for non-optimal strategies, including some produced by evolutionary algorithms.

  • Concept Accessibility as Basis for Evolutionary Reinforcement Learning of Dots and Boxes

    Publication Year: 2007, Page(s): 140 - 145
    Cited by: Papers (2)

    The challenge of creating teams of agents, which evolve or learn, to solve complex problems is addressed in the combinatorially complex game of dots and boxes (strings and coins). Previous evolutionary reinforcement learning (ERL) systems approaching this task with dynamic agent populations have shown some degree of success in game play, but are sensitive to conditions and suffer from unstable agent populations under difficult play and poor development against an easier opponent. A novel technique for preserving stability and balancing specialised and generalised rules in an ERL system is presented, motivated by the accessibility of concepts in human cognition, as opposed to the natural selection through population survivability common to ERL systems. Reinforcement learning in dynamic teams of mutable agents enables play comparable to hand-crafted artificial players. Performance and stability of development are enhanced when a measure of the frequency of reinforcement is separated from the quality measure of rules.

  • Tournament Particle Swarm Optimization

    Publication Year: 2007, Page(s): 146 - 153

    This paper introduces tournament particle swarm optimization (PSO) as a method to optimize the weights of game tree evaluation functions in a competitive environment. The method uses tournaments to ensure a fair evaluation of the performance of particles in the swarm relative to that of other particles. The empirical work presented compares the performance of different tournament methods that can be applied to tournament PSO, with application to Checkers.

  • NEAT Particles: Design, Representation, and Animation of Particle System Effects

    Publication Year: 2007, Page(s): 154 - 160
    Cited by: Papers (4) | Patents (1)

    Particle systems are a representation, computation, and rendering method for special effects such as fire, smoke, explosions, electricity, water, magic, and many other phenomena. This paper presents NEAT particles, a new design, representation, and animation method for particle systems tailored to real-time effects in video games and simulations. In NEAT particles, the neuroevolution of augmenting topologies (NEAT) method evolves artificial neural networks (ANN) that control the appearance and motion of particles. NEAT particles affords three primary advantages over traditional particle effect development methods. First, it decouples the creation of new particle effects from mathematics and programming, enabling users with little knowledge of either to produce complex effects. Second, it allows content designers to evolve a broader range of effects than typical development tools through a form of interactive evolutionary computation (IEC). And finally, it acts as a concept generator, allowing users to interactively explore the space of possible effects. In the future such a system may allow content to be evolved in the game itself, as it is played.

  • Using Stochastic AI Techniques to Achieve Unbounded Resolution in Finite Player Goore Games and its Applications

    Publication Year: 2007, Page(s): 161 - 167
    Cited by: Papers (1)

    The Goore Game (GG) introduced by M. L. Tsetlin in 1973 has the fascinating property that it can be resolved in a completely distributed manner with no intercommunication between the players. The game has recently found applications in many domains, including the field of sensor networks and quality-of-service (QoS) routing. In actual implementations of the solution, the players are typically replaced by learning automata (LA). The problem with the existing reported approaches is that the accuracy of the solution achieved is intricately related to the number of players participating in the game, which, in turn, determines the resolution. In other words, an arbitrary accuracy can be obtained only if the game has an infinite number of players. In this paper, we show how we can attain an unbounded accuracy for the GG by utilizing no more than three stochastic learning machines, and by recursively pruning the solution space to guarantee that the retained domain contains the solution to the game with a probability as close to unity as desired. The paper also conjectures how the solution can be applied to some of the application domains.
