In this paper we evaluate two learning methods applied to the ball-in-a-cup game. The first approach is based on imitation learning: the captured trajectory is encoded with dynamic movement primitives (DMPs), which allow simple adaptation of the demonstrated trajectory to the robot dynamics. The second approach uses reinforcement learning, which allows learning without any prior knowledge of the system or the environment. In contrast to the majority of previous attempts, we use the SARSA learning algorithm. Experiments for both approaches were performed on a Mitsubishi PA-10 robot arm.
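To illustrate the on-policy character of SARSA mentioned above, here is a minimal, hypothetical sketch of tabular SARSA on a toy chain environment (the environment, state space, and hyperparameters are illustrative assumptions, not the paper's actual task or implementation):

```python
import random

def sarsa_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9,
                epsilon=0.1, seed=0):
    """Tabular SARSA on a toy chain: actions move left/right,
    reward 1 for reaching the rightmost state (illustrative only)."""
    rng = random.Random(seed)
    # Q[state][action]; actions: 0 = left, 1 = right
    Q = [[0.0, 0.0] for _ in range(n_states)]

    def eps_greedy(s):
        # Epsilon-greedy action selection over the current Q-values.
        if rng.random() < epsilon:
            return rng.randrange(2)
        return 0 if Q[s][0] > Q[s][1] else 1

    def step(s, a):
        # Deterministic chain dynamics; episode ends at the right end.
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        done = (s2 == n_states - 1)
        return s2, (1.0 if done else 0.0), done

    for _ in range(episodes):
        s, a, done = 0, eps_greedy(0), False
        while not done:
            s2, r, done = step(s, a)
            a2 = eps_greedy(s2)
            # SARSA update: bootstrap on the action actually taken next
            # (on-policy), unlike Q-learning's max over actions.
            Q[s][a] += alpha * (r + gamma * Q[s2][a2] * (not done) - Q[s][a])
            s, a = s2, a2
    return Q
```

After training on this toy chain, the learned Q-values should prefer moving right from every non-terminal state, since that is the only path to reward.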