Neural networks (NNs) were evolved to learn to play the zero-sum game Othello (also known as Reversi) without relying on a priori or expert knowledge. The networks discovered game-playing strategies through co-evolution, playing only against themselves across generations. The effect of the spatial processing layer on evolution was investigated, and it was found that the evolutionary process depended crucially on how spatial information was presented. A simple sampling pattern based on the squares attacked by a single queen in chess resulted in the networks converging to a solution in which the majority of networks, handicapped by playing black and by playing without any look-ahead algorithm, could defeat a positional strategy using look-ahead at ply-depth 4 and a piece-differential strategy using look-ahead at ply-depth 6. Improvement and convergence were accompanied by a gradual increase in the survival time of neural-network strategies, from fewer than 10 generations to about 600 generations. Surprisingly, the evolved neural networks had difficulty defeating a simple mobility strategy playing at ply-depth 2. This work suggests that, in deciding a suitable way to spatially sample a board position, it is important to consider the rules of the game.
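The abstract does not spell out the exact network architecture, but the queen-based sampling pattern it describes can be sketched directly: for each square of the 8x8 board, the spatial layer would sample the squares a chess queen placed there would attack (its row, column, and both diagonals). The function name and the use of Python here are illustrative assumptions, not the paper's implementation.

```python
def queen_sample(r, c, n=8):
    """Return the set of squares a chess queen on (r, c) attacks
    on an n x n board: same row, same column, and both diagonals,
    excluding (r, c) itself. Illustrative sketch of the sampling
    pattern described in the abstract, not the paper's code."""
    squares = set()
    # Walk outward in all eight queen directions until the edge.
    for dr, dc in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        rr, cc = r + dr, c + dc
        while 0 <= rr < n and 0 <= cc < n:
            squares.add((rr, cc))
            rr += dr
            cc += dc
    return squares

# On an 8x8 board a corner queen attacks 21 squares,
# while a queen on a central square such as (3, 3) attacks 27.
print(len(queen_sample(0, 0)))  # 21
print(len(queen_sample(3, 3)))  # 27
```

Under this sampling, each square's input neighbourhood follows the lines along which Othello pieces are actually flipped, which is one plausible reading of the abstract's conclusion that the sampling pattern should reflect the rules of the game.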