In this paper, we investigate the integration of a best population pool with social learning, utilising evolutionary neural networks. The experiments are divided into several intervals, and we keep the best player from each interval in the best population pool (BP-pool). Social learning allows poorly performing players to learn from players that play at a higher level. The feed-forward neural networks are evolved via evolution strategies, and no domain knowledge is incorporated into the players. At the beginning of each match, the evolved neural-network players play against a rule-based player, Gondo. The remainder of the game is then taken over by another copy of Gondo, and the two Gondos continue by playing against each other. Our results demonstrate that learning takes place.
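The interval-based scheme described above can be sketched in a few lines. This is an illustrative toy only: the population size, interval lengths, mutation step, the quadratic stand-in fitness, and the fraction of learners are all assumptions, not the paper's actual setup (where fitness would come from games against Gondo), but it shows the interplay of an evolution-strategies loop, a BP-pool that retains each interval's best player, and social learning that replaces the worst performers with mutated copies of pooled elites.

```python
import random

# Hypothetical sketch of the BP-pool plus social-learning scheme.
# All constants and the fitness function are illustrative assumptions.
POP_SIZE = 20
GENOME_LEN = 8           # stand-in for a feed-forward network's weight vector
INTERVALS = 5
GENS_PER_INTERVAL = 10
SIGMA = 0.1              # Gaussian mutation step for the evolution strategy


def fitness(genome):
    # Placeholder objective; in the paper, fitness comes from game play.
    return -sum(w * w for w in genome)


def mutate(genome):
    return [w + random.gauss(0.0, SIGMA) for w in genome]


def social_learning(pop, bp_pool, frac=0.25):
    """Replace the worst-performing players with mutated copies of BP-pool elites."""
    pop.sort(key=fitness)                      # worst players first
    n = max(1, int(frac * len(pop)))
    for i in range(n):
        pop[i] = mutate(random.choice(bp_pool))


random.seed(0)
population = [[random.gauss(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
bp_pool = []

for interval in range(INTERVALS):
    for _ in range(GENS_PER_INTERVAL):
        # Simple (mu + mu) ES step: mutate everyone, keep the better half.
        offspring = [mutate(g) for g in population]
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:POP_SIZE]
    bp_pool.append(max(population, key=fitness))  # keep interval's best player
    social_learning(population, bp_pool)

print(len(bp_pool))  # one retained elite per interval
```

Because the (mu + mu) selection keeps parents and social learning only overwrites the worst fraction, the best fitness in the population never decreases across intervals, which is the property the BP-pool is meant to exploit.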