In recent robotics research, much attention has been focused on using reinforcement learning to design robot controllers. However, difficulties remain; one of the best known is the state space explosion problem. As the state space of a learning system becomes continuous and high-dimensional, its combinatorial state space grows exponentially and the learning process becomes prohibitively time consuming. In this paper, we propose an adaptive state space recruitment strategy for reinforcement learning, which enables the system to divide the state space gradually according to task complexity and the progress of learning. Simulation results and a real-robot implementation demonstrate the validity of the method.
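The core idea of gradually dividing the state space can be sketched as follows. This is a minimal illustration, not the paper's actual recruitment rule: the class name, the splitting criterion (a running TD-error average exceeding a threshold), and all parameter values are assumptions made for the example.

```python
# Sketch of adaptive state-space recruitment for tabular Q-learning on a
# one-dimensional continuous state. Cells start coarse; a cell whose
# running TD error stays high is split in two, so resolution is added
# only where the task requires it. (Illustrative: the splitting rule
# and parameters are assumptions, not the paper's exact method.)

class AdaptiveQ:
    def __init__(self, lo, hi, n_actions,
                 alpha=0.5, gamma=0.9, split_thresh=0.5, max_cells=64):
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.split_thresh, self.max_cells = split_thresh, max_cells
        self.cells = [(lo, hi)]                 # each interval is one discrete state
        self.q = {(lo, hi): [0.0] * n_actions}  # Q-values per cell
        self.err = {(lo, hi): 0.0}              # running |TD error| per cell

    def cell_of(self, s):
        for lo, hi in self.cells:
            if lo <= s < hi:
                return (lo, hi)
        return self.cells[-1]                   # fallback for out-of-range s

    def update(self, s, a, r, s_next, done=False):
        c, c_next = self.cell_of(s), self.cell_of(s_next)
        target = r if done else r + self.gamma * max(self.q[c_next])
        td = target - self.q[c][a]
        self.q[c][a] += self.alpha * td
        self.err[c] = 0.9 * self.err[c] + 0.1 * abs(td)
        # Recruit new states: split this cell if its error stays high.
        if self.err[c] > self.split_thresh and len(self.cells) < self.max_cells:
            self._split(c)

    def _split(self, c):
        lo, hi = c
        mid = 0.5 * (lo + hi)
        self.cells.remove(c)
        for child in ((lo, mid), (mid, hi)):
            self.cells.append(child)
            self.q[child] = list(self.q[c])     # children inherit parent's Q
            self.err[child] = 0.0
        del self.q[c], self.err[c]
```

Starting from a single coarse cell, repeated updates in a region with persistent TD error trigger splits there, while quiet regions keep their coarse discretization, so the table grows with task difficulty rather than with a fixed grid resolution.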