A Node-based Parallel Game Tree Algorithm Using GPUs

6 Author(s)

Liang Li; Hong Liu; Peiyu Liu; Taoying Liu (Inst. of Comput. Technol., Beijing, China)

Game tree search is a classical problem in game theory and artificial intelligence. A fast game tree algorithm is critical for computer games that require real-time responses. In this paper, we focus on leveraging the massive parallelism of GPUs to accelerate game tree algorithms and propose a concise, general parallel game tree algorithm on GPUs. We present and theoretically analyze a performance model of the algorithm. We also implement the algorithm for a real computer game, Connect6, and use this implementation to verify the effectiveness and efficiency of our approach. Experiments support our theoretical results and show the good performance of our approach. Compared to classical CPU-based game tree algorithms, our algorithm achieves a speedup of 70.8 when no pruning is performed. When pruning is considered (which reflects the practical performance of our algorithm), the speedup reaches about 7.0. The insight of our work is that GPUs are a feasible way to improve the performance of game tree algorithms.
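The abstract contrasts the GPU algorithm against classical CPU-based game tree search with pruning. The paper's GPU kernels are not reproduced here; as a minimal sketch of the CPU baseline being compared against, the following shows sequential minimax search with alpha-beta pruning over a hypothetical hand-built tree (leaves are integer scores, interior nodes are lists of children; all names are illustrative, not from the paper):

```python
def alpha_beta(node, alpha, beta, maximizing):
    """Sequential minimax with alpha-beta pruning.

    A leaf is an int score; an interior node is a list of children.
    """
    if isinstance(node, int):
        return node  # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: opponent will avoid this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Depth-2 example: root (max) over two min nodes.
# min([3, 5]) = 3, so alpha = 3; in [2, 9] the leaf 2 already
# makes that min node <= 2 < alpha, so the leaf 9 is pruned.
tree = [[3, 5], [2, 9]]
best = alpha_beta(tree, float("-inf"), float("inf"), True)
print(best)  # prints 3
```

The branch-dependent cutoffs above are exactly what makes pruning hard to parallelize, which is consistent with the abstract's smaller speedup (about 7.0) in the pruned case versus 70.8 without pruning.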

Published in:

2012 IEEE International Conference on Cluster Computing (CLUSTER)

Date of Conference:

24-28 Sept. 2012