In this paper, we address the autonomous control of a three-dimensional snake-like robot by reinforcement learning, with application to locomotion over rubble. In general, snake-like robots achieve high mobility through their many degrees of freedom, which allows them to move on rubble. However, those many degrees of freedom cause the state-explosion problem, and the complexity of rubble leads to incomplete learning. Consequently, reinforcement learning cannot be applied directly to conventional snake-like robots moving on rubble. To solve these problems, we focus on the properties of the real environment and on the dynamics of the mechanical body. By considering these real-world properties, we design the robot's body so that it abstracts the small state-action space needed for learning, which makes reinforcement learning applicable. To demonstrate the effectiveness of the proposed snake-like robot, we conducted experiments in which learning was completed within a reasonable time and the robot effectively adapted to an unknown three-dimensional environment.
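The abstract's key idea is that a body designed to yield a small, abstract state-action space makes tabular reinforcement learning tractable. The paper does not specify the learning algorithm here; as a hedged illustration only, the following sketch assumes a generic tabular Q-learning agent on a hypothetical 4-state, 2-action environment (all names, sizes, and rewards are invented for this example) to show why a small space lets learning finish quickly.

```python
import random

random.seed(0)

# Hypothetical abstracted state-action space (not the paper's actual design):
# a small table is enough, so learning converges in a few hundred episodes.
N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Toy deterministic environment: action 1 advances one state, action 0
    stays put; reaching the last state gives reward 1 and ends the episode."""
    next_state = min(state + action, N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

# Q-table initialized to zero.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Standard Q-learning update.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state])
                                     - Q[state][action])
        state = next_state

# The learned greedy policy at each non-terminal state.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a])
          for s in range(N_STATES - 1)]
print(policy)
```

With only 4 x 2 table entries, the agent learns the "always advance" policy quickly; the same algorithm over the raw joint space of a many-degree-of-freedom robot would face the state-explosion problem the abstract describes.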