A novel approach is presented for autonomously building visual models of unknown objects using a humanoid robot. Previous methods have addressed the specific problem of selecting the next best view during modeling and recognition. Our approach differs, however, in that it exploits humanoid-specific capabilities: an embedded vision sensor and redundant motion capabilities. In previous work, we presented an approach to this problem that relies on a differentiable formulation of the visual evaluation, so that it could be integrated with our posture generation method. To overcome some of its limitations, we propose a new method formulated in two steps: (i) a derivative-free optimization algorithm finds a camera pose that maximizes the amount of visible unknown data, and (ii) a whole-body robot posture is generated by a different optimization method in which the computed camera pose is set as a constraint on the robot head.
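Step (i) can be illustrated with a minimal sketch. The visibility model, voxel data, and function names below are illustrative assumptions, not the paper's actual implementation: a toy score counts "unknown" voxels inside a simple viewing cone, and a derivative-free local search perturbs the camera pose to maximize that score, mirroring why a gradient-free optimizer is needed when the visual evaluation is not differentiable.

```python
import math
import random

random.seed(0)

# Toy "unknown" voxels scattered in front of the camera (illustrative only).
unknown = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(2, 4))
           for _ in range(200)]

def visible_count(pose):
    """Count unknown voxels inside a simple viewing cone.

    pose = (x, y): lateral camera offset; the camera looks along +z.
    This stands in for the visual evaluation of step (i), which is
    assumed non-differentiable, hence the derivative-free search.
    """
    cx, cy = pose
    half_angle = math.radians(30)
    count = 0
    for vx, vy, vz in unknown:
        dx, dy, dz = vx - cx, vy - cy, vz
        angle = math.atan2(math.hypot(dx, dy), dz)
        if abs(angle) < half_angle:
            count += 1
    return count

def derivative_free_search(n_iters=500, step=0.3):
    """Random local search: perturb the pose, keep strict improvements."""
    best = (0.0, 0.0)
    best_score = visible_count(best)
    for _ in range(n_iters):
        cand = (best[0] + random.uniform(-step, step),
                best[1] + random.uniform(-step, step))
        score = visible_count(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

pose, score = derivative_free_search()
print(pose, score)
```

The resulting camera pose would then be handed to step (ii) as a constraint on the robot head while a separate optimizer generates the whole-body posture.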