On-policy reinforcement learning provides online adaptation, a characteristic of intelligent systems and lifelong learning. Unlike dynamic programming, reinforcement learning with an efficient exploration strategy does not require an exhaustive sweep of the search space for convergence. For efficient and "believable" online performance, an exploration strategy must also avoid cycling through previously visited solutions and know when to stop, without getting stuck in a local optimum. This paper addresses these problems with tabu search (TS) exploration, and several TS-based exploration strategies for reinforcement learning are introduced. Experimental results are presented for the game of Go, a deterministic, perfect-information two-player game, using Sarsa learning vector quantization (SLVQ), an on-policy reinforcement learning algorithm.
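The core idea of tabu search exploration can be illustrated with a minimal sketch: action selection consults a fixed-length tabu list of recently taken state-action pairs and excludes them from the candidate set, which prevents the agent from cycling through the same solutions. This is an illustrative assumption about the mechanism, not the paper's exact algorithm; the function `tabu_action`, the dictionary-based Q-table, and the tenure value are hypothetical choices for the sketch.

```python
import random
from collections import deque

def tabu_action(q_values, state, tabu, n_actions, epsilon=0.1):
    """Epsilon-greedy action selection that skips tabu state-action pairs.

    q_values: dict mapping (state, action) -> estimated value
    tabu: collection of recently taken (state, action) pairs
    """
    # Exclude actions currently on the tabu list for this state.
    allowed = [a for a in range(n_actions) if (state, a) not in tabu]
    if not allowed:
        # If every action is tabu, fall back to the full action set.
        allowed = list(range(n_actions))
    if random.random() < epsilon:
        return random.choice(allowed)
    # Greedy choice among the non-tabu actions.
    return max(allowed, key=lambda a: q_values.get((state, a), 0.0))

# Usage: a deque with maxlen acts as a tabu list with tenure 3 --
# the oldest entry drops off automatically as new ones are appended.
tabu = deque(maxlen=3)
q = {(0, 0): 1.0, (0, 1): 0.5, (0, 2): 0.2}
a = tabu_action(q, state=0, tabu=tabu, n_actions=3, epsilon=0.0)
tabu.append((0, a))  # the chosen pair becomes tabu for the next steps
```

With a tenure of 3, the greedy action is forced to change on consecutive visits to the same state, which is exactly the anti-cycling behavior the abstract describes.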