Abstract:
With its large game-tree complexity and EXPTIME-complete status, English Draughts, weakly solved only recently after almost two decades of computation, remains hard for intelligent computer agents to learn. In this paper we present a Temporal-Difference method whose value function is nonlinearly approximated by a 4-layer multi-layer perceptron. We built multiple English Draughts playing agents, each starting from a randomly initialized strategy, which use this method during self-play to improve their strategies. We show that the agents learn by comparing their win rates relative to their parameters. Our best agent wins against the computer draughts programs Neuro Draughts, KCheckers and CheckerBoard with the easych engine, and loses to Chinook, GuiCheckers and CheckerBoard with the strong cake engine. Overall, our best agent has reached an amateur-league level.
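The abstract describes the learning setup only at a high level. As a rough illustration (not the authors' network, features, or training schedule), the Python sketch below shows a TD(0)-style update applied to a small 4-layer perceptron value function during a toy self-play episode; the layer widths, learning rate, terminal reward of 1, and the random "position" vectors are all assumptions made for the example.

# Minimal sketch, assuming a 4-layer tanh MLP value network and TD(0);
# layer sizes, encoding, and rewards are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# 4-layer perceptron: input -> hidden1 -> hidden2 -> scalar value in (-1, 1)
sizes = [32, 40, 10, 1]                      # assumed layer widths
W = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Return the activations of every layer (tanh units throughout)."""
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(np.tanh(acts[-1] @ Wi + bi))
    return acts

def td_update(x_t, x_t1, reward, alpha=0.01, gamma=1.0, terminal=False):
    """One TD(0) step: move V(x_t) toward reward + gamma * V(x_t1)."""
    acts = forward(x_t)
    v_t = acts[-1][0]
    v_t1 = 0.0 if terminal else forward(x_t1)[-1][0]
    delta = reward + gamma * v_t1 - v_t          # TD error

    # Backpropagate dV(x_t)/dparams and nudge each parameter by alpha * delta.
    grad = np.array([1.0 - v_t ** 2])            # derivative of output tanh
    for i in reversed(range(len(W))):
        dW, db = np.outer(acts[i], grad), grad
        if i > 0:
            grad = (W[i] @ grad) * (1.0 - acts[i] ** 2)
        W[i] += alpha * delta * dW
        b[i] += alpha * delta * db

# Toy self-play episode: random "positions", a win reward only at the end.
positions = [rng.uniform(-1, 1, sizes[0]) for _ in range(5)]
for t in range(len(positions) - 1):
    last = t == len(positions) - 2
    td_update(positions[t], positions[t + 1],
              reward=1.0 if last else 0.0, terminal=last)
print("value of first position:", forward(positions[0])[-1][0])

In this sketch the TD error is computed from successive positions of the same episode and the gradient of the network's own value estimate is used to update the weights, which mirrors the general idea of neurally approximated Temporal-Difference learning during self-play; any resemblance to the paper's exact architecture or update rule is not implied.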
Date of Conference: 23-26 August 2010
Date Added to IEEE Xplore: 07 October 2010