We propose an algorithm for the visual detection and localisation of the hand of a humanoid robot. The algorithm imposes low requirements on the type of supervision needed to achieve good performance: the system performs feature selection and adaptation using images that are labelled only as containing the hand or not, without any explicit segmentation. Our algorithm is an online variant of Multiple Instance Learning based on boosting. Experiments in real-world conditions on the iCub humanoid robot confirm that the algorithm can learn the visual appearance of the hand, reaching an accuracy comparable with that of its off-line version. This remains true when the supervision is generated by the robot itself in a completely autonomous fashion. Algorithms with weak supervision requirements, like the one we describe, are useful for autonomous robots that learn and adapt online to a changing environment. The algorithm is not hand-specific and could easily be applied to a wide range of problems involving visual recognition of generic objects.
Date of Conference: 25-30 Sept. 2011
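The combination the abstract describes, online Multiple Instance Learning with boosting over bags labelled only as positive ("contains the hand") or negative, can be illustrated with a minimal sketch. All class names, the Gaussian weak learners, and the synthetic data below are our own illustrative assumptions, not the paper's implementation or its visual features: weak classifiers are updated online from streaming bags, the bag probability is the noisy-OR of instance probabilities, and a greedy step re-selects the boosted ensemble to maximize the bag log-likelihood.

```python
import math
import random
from collections import deque

def sigmoid(z):
    """Logistic function with clipping for numerical stability."""
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

class OnlineStump:
    """Weak learner (illustrative choice): a Gaussian stump on one feature
    dimension whose per-class mean/variance are updated online."""
    def __init__(self, dim):
        self.dim = dim
        self.mu = {0: 0.0, 1: 0.0}
        self.var = {0: 1.0, 1: 1.0}
        self.n = {0: 0, 1: 0}

    def update(self, x, y):
        v = x[self.dim]
        self.n[y] += 1
        lr = 1.0 / self.n[y]
        d = v - self.mu[y]
        self.mu[y] += lr * d
        self.var[y] = (1.0 - lr) * self.var[y] + lr * d * d + 1e-6

    def score(self, x):
        v = x[self.dim]
        def logp(y):
            return (-0.5 * math.log(2.0 * math.pi * self.var[y])
                    - (v - self.mu[y]) ** 2 / (2.0 * self.var[y]))
        return logp(1) - logp(0)  # log-likelihood ratio

class OnlineMILBoost:
    """Online MIL boosting sketch: weak learners are updated from bags, and
    a greedy step re-selects the ensemble maximizing the noisy-OR bag
    log-likelihood over a buffer of recent bags."""
    def __init__(self, n_dims, n_select=2, buffer_size=20):
        self.pool = [OnlineStump(d) for d in range(n_dims)]
        self.n_select = n_select
        self.selected = list(range(n_select))
        self.buffer = deque(maxlen=buffer_size)

    def bag_prob(self, bag, learners):
        # Noisy-OR: a bag is positive if at least one instance is positive.
        p_none = 1.0
        for x in bag:
            s = sum(self.pool[i].score(x) for i in learners)
            p_none *= 1.0 - sigmoid(s)
        return 1.0 - p_none

    def _log_lik(self, learners):
        ll = 0.0
        for bag, label in self.buffer:
            p = min(max(self.bag_prob(bag, learners), 1e-9), 1.0 - 1e-9)
            ll += math.log(p) if label == 1 else math.log(1.0 - p)
        return ll

    def update(self, bag, label):
        # MIL approximation: each instance inherits its bag's label
        # for the online update of the weak learners.
        for h in self.pool:
            for x in bag:
                h.update(x, label)
        self.buffer.append((bag, label))
        # Greedy forward selection of the boosted ensemble.
        chosen = []
        for _ in range(self.n_select):
            best = max((i for i in range(len(self.pool)) if i not in chosen),
                       key=lambda i: self._log_lik(chosen + [i]))
            chosen.append(best)
        self.selected = chosen

    def predict(self, bag):
        return self.bag_prob(bag, self.selected) > 0.5

# Tiny synthetic demo (hypothetical data, not the paper's visual features).
random.seed(0)

def make_bag(positive, dims=4):
    # Two instances per bag; negatives are noise, and a positive bag
    # hides a single discriminative instance (value near 3 on dimension 0).
    bag = [[random.gauss(0, 0.2) if d == 0 else random.gauss(0, 1.0)
            for d in range(dims)] for _ in range(2)]
    if positive:
        bag[0][0] = random.gauss(3, 0.2)
    return bag

model = OnlineMILBoost(n_dims=4, n_select=2)
for t in range(60):
    label = t % 2
    model.update(make_bag(label == 1), label)

correct = sum(model.predict(make_bag(lbl == 1)) == (lbl == 1)
              for lbl in [0, 1] * 10)
accuracy = correct / 20
```

The noisy-OR combination is the standard way MIL encodes "a bag is positive iff at least one instance is positive", which matches the weak supervision in the abstract: the robot only knows whether the hand is somewhere in the image, never which pixels belong to it.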