A study was conducted to find out how game-playing strategies for Othello (also known as Reversi) can be learned without expert knowledge. The approach coevolved a fixed-architecture neural-network evaluation function combined with a standard minimax search algorithm. Comparing the evolving neural networks against computer players that used deterministic strategies allowed evolution to be observed in real time. The neural networks evolved to outperform computer players searching at higher ply-depths, despite being handicapped by playing Black and searching with minimax at a ply-depth of only two. Moreover, the playing ability of the population progressed from novice, through intermediate, to master level. Individual neural networks discovered various game-playing strategies, first positional play and later mobility play. These results show that neural networks can be evolved as evaluation functions, despite the difficulties generally associated with this approach. Success in this case was due to three factors: a simple spatial preprocessing layer in the neural network that captured spatial information, self-adaptation of every weight and bias of the network, and a selection method that carried a diverse population of neural networks forward from one generation to the next.
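As a rough illustration of the components named above, the following is a minimal sketch, not the paper's actual implementation: a fixed-architecture network scores board positions, plain minimax at a ply-depth of two selects moves using that network as its leaf evaluator, and self-adaptive mutation pairs every weight with its own step size. All names, the network size, the board encoding, and the toy two-move game used in the demonstration are assumptions for illustration only.

```python
import math
import random

random.seed(0)

BOARD_CELLS = 64  # 8x8 Othello board encoded as +1 / -1 / 0 per cell


def make_network(n_in=BOARD_CELLS, n_hidden=4):
    """Random fixed-architecture net; each weight carries its own sigma
    (step size) so mutation can be self-adaptive per weight."""
    return {
        "w1": [[(random.gauss(0, 0.2), 0.05) for _ in range(n_in)]
               for _ in range(n_hidden)],
        "w2": [(random.gauss(0, 0.2), 0.05) for _ in range(n_hidden)],
    }


def evaluate(net, board):
    """Score a board position from the current player's point of view."""
    hidden = [math.tanh(sum(w * x for (w, _), x in zip(row, board)))
              for row in net["w1"]]
    return math.tanh(sum(w * h for (w, _), h in zip(net["w2"], hidden)))


def mutate(net, tau=0.1):
    """Self-adaptation: each sigma is perturbed log-normally, then the
    weight is perturbed by a Gaussian scaled by its own sigma."""
    def step(pair):
        w, s = pair
        s = max(1e-4, s * math.exp(tau * random.gauss(0, 1)))
        return (w + s * random.gauss(0, 1), s)
    return {
        "w1": [[step(p) for p in row] for row in net["w1"]],
        "w2": [step(p) for p in net["w2"]],
    }


def minimax(net, board, legal_moves, apply_move, depth=2, maximizing=True):
    """Plain minimax to a fixed ply-depth; leaves scored by the network."""
    moves = legal_moves(board)
    if depth == 0 or not moves:
        return evaluate(net, board), None
    best_move = None
    best_val = -math.inf if maximizing else math.inf
    for m in moves:
        val, _ = minimax(net, apply_move(board, m), legal_moves,
                         apply_move, depth - 1, not maximizing)
        if (maximizing and val > best_val) or (not maximizing and val < best_val):
            best_val, best_move = val, m
    return best_val, best_move


# Demonstration with a trivial stand-in game (NOT Othello move rules):
# two legal moves until two stones are placed, then the game ends.
net = make_network()
empty = [0.0] * BOARD_CELLS
toy_legal = lambda b: [0, 1] if sum(abs(x) for x in b) < 2 else []

def toy_apply(b, m):
    nb = list(b)
    nb[m] = 1.0
    return nb

score, move = minimax(net, empty, toy_legal, toy_apply, depth=2)
child = mutate(net)  # one self-adaptive offspring of the parent network
```

In this sketch the ply-depth-two handicap mentioned in the study corresponds to `depth=2`: the search looks only one move ahead for each side before falling back on the evolved evaluation function.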