In recent years, reinforcement learning, which can acquire reflexive and adaptive behavior, has attracted attention as a learning method for robot control. However, many problems remain to be solved before the method can be put to practical use. One of these is the handling of the state space and the action space. Most existing reinforcement learning algorithms deal with discrete state and action spaces. When the discretization is too coarse, subtle control cannot be achieved (imperfect perception); conversely, when it is too fine, the search space grows accordingly and stable convergence of learning cannot be obtained (the curse of dimensionality). In this paper, we propose a nested actor/critic algorithm that can deal with continuous state and action spaces. The proposed method embeds a child actor/critic inside the actor of a parent actor/critic. We evaluated the proposed algorithm on a stabilization control problem, in both simulation and a prototype of a joint-driven double inverted pendulum.
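As background for the abstract above, the sketch below shows a minimal actor/critic with a continuous action space, the building block the nested algorithm composes. It is an illustrative toy only, not the paper's nested method: the task (drive a scalar state to zero through linear dynamics), the Gaussian policy, the linear/quadratic function approximators, and all learning rates are assumptions chosen for brevity.

```python
import random

random.seed(0)

# Hypothetical one-step regulator task: state s in [-1, 1], dynamics s' = s + a,
# reward r = -(s')^2 rewards driving the state toward zero.
alpha, beta = 0.01, 0.05  # actor / critic learning rates (assumed)
sigma = 0.3               # fixed Gaussian exploration noise (assumed)
w = 0.0                   # actor weight: mean action = w * s (optimum is w = -1)
v = 0.0                   # critic weight: baseline V(s) = v * s^2

for step in range(20000):
    s = random.uniform(-1.0, 1.0)
    a = random.gauss(w * s, sigma)        # sample a continuous action
    r = -((s + a) ** 2)                   # reward from the resulting state
    delta = r - v * s * s                 # TD error against the critic's baseline
    v += beta * delta * s * s             # critic: move baseline toward observed return
    # Actor: policy-gradient step, grad log pi = (a - w*s) * s / sigma^2
    w += alpha * delta * (a - w * s) * s / (sigma ** 2)

print(f"learned actor weight w = {w:.2f} (target -1.0)")
```

With a coarse discretization this regulator could only emit a few fixed action levels; the Gaussian actor instead adjusts a continuous action mean, which is the motivation the abstract gives for handling continuous state and action spaces directly.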