We present a significant improvement of our previous work on autonomously building visual models of unknown objects with a humanoid robot. We previously introduced a Next-Best-View solution with two stages: (i) a derivative-free optimization algorithm finds a camera pose that maximizes the amount of unknown data visible, and (ii) a whole-body robot posture is generated with a different optimization method, in which the computed camera pose is set as a constraint on the robot's head. Here, the original algorithm is modified to improve robustness and to broaden the range of cases that can be handled. More specifically, the visibility constraint on the object's landmarks and the quantification of the unknown are improved, and a new constraint is introduced to avoid specific poses.
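To illustrate stage (i), the sketch below shows a toy version of the camera-pose search: a 2-D occupancy grid stands in for the object's unknown region, visibility is a simple field-of-view cone test, and random sampling stands in for the paper's derivative-free optimizer. All names, the grid, and the camera model here are hypothetical simplifications, not the actual implementation.

```python
import math
import random

# Hypothetical toy scene (NOT the paper's setup): cells of a 2-D grid
# that are still unknown after partial observation of the object.
UNKNOWN = {(x, y) for x in range(4, 8) for y in range(4, 8)}

def visible_unknown(pose, fov=math.radians(60), max_range=6.0):
    """Count unknown cells inside the camera's field-of-view cone."""
    cx, cy, yaw = pose
    count = 0
    for (x, y) in UNKNOWN:
        dx, dy = x - cx, y - cy
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range:
            continue
        angle = abs(math.atan2(dy, dx) - yaw)
        angle = min(angle, 2 * math.pi - angle)  # wrap to [0, pi]
        if angle <= fov / 2:
            count += 1
    return count

def next_best_view(samples=2000, seed=0):
    """Derivative-free search: sample candidate camera poses and keep the
    one seeing the most unknown cells (a stand-in for stage (i))."""
    rng = random.Random(seed)
    best_pose, best_score = None, -1
    for _ in range(samples):
        pose = (rng.uniform(0.0, 12.0), rng.uniform(0.0, 12.0),
                rng.uniform(-math.pi, math.pi))
        score = visible_unknown(pose)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score

pose, score = next_best_view()
print(f"best camera pose sees {score} of {len(UNKNOWN)} unknown cells")
```

In the full pipeline, the pose returned by this search would then be handed to stage (ii) as a constraint on the robot's head while a whole-body posture is optimized.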