This paper presents an image-based visual servoing approach for a mobile manipulation task, in which a mobile robot has to move towards an object located on a table (docking) and then pick up that object with its gripper. The robot's vision system consists of a pan-tilt camera that is used to track the object and the edge of the table. A minimal number of state variables are extracted from the vision system, and a reactive controller is used to implement the docking behaviour, without requiring any geometric model of the scene. The main aim of the work was to develop a practical reinforcement learning scheme to automatically acquire a high-performance controller in a short training time (less than 1 hour) on the real robot. We compare a number of control algorithms, including a hand-designed linear controller, a novel reinforcement learning algorithm for mobile robots, and a scheme using the linear controller as a bias to accelerate reinforcement learning. By experimental analysis of controllability and docking time, we found that the biased learning system could improve on the performance of the linear controller, while requiring substantially less training time than unbiased learning.
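The biasing scheme described above can be sketched in minimal form: a hand-designed linear controller maps the small visual state vector to an action, and the learner explores corrections around that action rather than starting from scratch. All names, gains, and the additive-correction structure here are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical hand-designed linear controller: maps a minimal visual
# state (e.g. object bearing error, table-edge offset) to a steering
# command via fixed gains. Gains are illustrative, not from the paper.
def linear_controller(state, gains=(-0.8, -0.3)):
    return sum(g * s for g, s in zip(gains, state))

# Biased action selection (assumed additive-residual form): the learned
# correction and exploration noise are applied on top of the linear
# controller's output, so early behaviour is already near-competent.
def biased_action(state, learned_correction, noise_scale=0.1):
    bias = linear_controller(state)
    exploration = random.gauss(0.0, noise_scale)
    return bias + learned_correction(state) + exploration
```

With a zero correction and no noise, the biased policy reduces exactly to the linear controller, which is why early training performance does not collapse the way unbiased learning's does.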