Standard reinforcement learning algorithms have proven to be effective tools for letting an agent learn from the experience generated by its interaction with an environment. In this paper, an evolutionary approach is proposed to accelerate learning in tabular reinforcement learning algorithms. In the proposed approach, state values are not only approximated but also evolved using concepts from evolutionary algorithms, with the additional benefit that each agent can exchange its knowledge with the others. The proposed evolutionary value function approximation moves learning from a single isolated stage to cooperative exploration of the search space, thereby accelerating learning. The performance of the proposed algorithm is compared with that of the standard SARSA algorithm, and several of its properties are discussed. The experimental analysis confirms that the proposed approach converges faster, with only a negligible increase in computational complexity.
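The general idea described above can be sketched in code: a population of tabular SARSA agents learns independently, and a periodic evolutionary step lets them exchange knowledge by recombining their value tables. The toy corridor environment, the hyperparameters, and the specific recombination rule (pulling each entry toward the population's best estimate) are illustrative assumptions, not the paper's exact method.

```python
import random

N = 8                      # states in a toy 1-D corridor; goal at state N-1
ACTIONS = (-1, +1)         # move left / right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(s, a):
    """Deterministic corridor dynamics with a small step cost."""
    s2 = max(0, min(N - 1, s + a))
    done = (s2 == N - 1)
    return s2, (1.0 if done else -0.01), done

def choose(Q, s):
    """Epsilon-greedy action selection over the agent's Q-table."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: Q[s][i])

def sarsa_episode(Q):
    """One episode of standard tabular SARSA for a single agent."""
    s = 0
    a = choose(Q, s)
    for _ in range(200):
        s2, r, done = step(s, ACTIONS[a])
        a2 = choose(Q, s2)
        Q[s][a] += ALPHA * (r + GAMMA * Q[s2][a2] * (not done) - Q[s][a])
        s, a = s2, a2
        if done:
            break

def evolve(pop):
    """Knowledge exchange: pull every entry toward the best estimate
    in the population (one possible recombination operator)."""
    for s in range(N):
        for a in range(len(ACTIONS)):
            best = max(Q[s][a] for Q in pop)
            for Q in pop:
                Q[s][a] = 0.5 * (Q[s][a] + best)

random.seed(0)
pop = [[[0.0, 0.0] for _ in range(N)] for _ in range(4)]  # 4 agents
for gen in range(50):
    for Q in pop:           # isolated learning phase
        sarsa_episode(Q)
    evolve(pop)             # cooperative evolutionary phase

# Greedy policy of the first agent over non-terminal states
# (0 = left, 1 = right); it should largely point toward the goal.
policy = [max(range(2), key=lambda i: pop[0][s][i]) for s in range(N - 1)]
print(policy)
```

The evolutionary step is what distinguishes this sketch from plain SARSA: without `evolve`, each agent would have to discover the reward signal on its own, whereas here an estimate found by any agent propagates to the whole population between episodes.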