We present an integrated and fully autonomous eye-in-hand system for 3D object modeling. The system hardware consists of a laser range sensor mounted on a six-DOF manipulator arm, and the task is to autonomously build a 3D model of an object in situ, i.e., the object may not be moved and must be scanned in its original location. Our system assumes no knowledge of either the object or the rest of the robot's workspace. The overall planner integrates a next best view (NBV) algorithm with a sensor-based roadmap planner. Our NBV algorithm efficiently searches the five-dimensional view space to determine the best modeling view configuration while respecting key constraints such as field of view (FOV), viewing angle, overlap, and occlusion. The sensor-based roadmap planner determines a collision-free path to move the manipulator so that the wrist-mounted scanner reaches the view configuration. If the desired view configurations are not collision-free, or there is no free path to reach them, the planner explores the workspace so as to facilitate further modeling. This process is repeated until the entire object is scanned. We have implemented the system, and our results show that it is able to autonomously build a 3D model of an object in an unknown environment.
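To make the NBV idea concrete, the following is a minimal, hypothetical sketch of greedy view scoring over a 5-DOF view space (position plus pan and tilt). All thresholds, the patch representation, and the coverage model are illustrative assumptions, not the paper's actual algorithm; in particular, the FOV check using pan/tilt and the occlusion test via ray casting are omitted for brevity, and only the viewing-angle and overlap constraints are modeled.

```python
import math

# Assumed thresholds (illustrative, not from the paper).
MAX_VIEW_ANGLE = math.radians(60)  # reject grazing views of a surface patch
MIN_OVERLAP = 0.2                  # require overlap with the partial model
                                   # so a new scan can be registered

def view_score(view, patches, scanned):
    """Score one candidate view (x, y, z, pan, tilt): count unseen surface
    patches it covers well, subject to viewing-angle and overlap constraints.
    Each patch is (px, py, pz, nx, ny, nz): position plus unit normal.
    The FOV test using pan/tilt and occlusion ray casting are omitted."""
    x, y, z, pan, tilt = view
    new = old = 0
    for i, (px, py, pz, nx, ny, nz) in enumerate(patches):
        # Ray from the sensor to the patch.
        dx, dy, dz = px - x, py - y, pz - z
        dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-9
        # Cosine of the angle between the viewing ray and the patch normal.
        cosang = -(dx * nx + dy * ny + dz * nz) / dist
        if cosang < math.cos(MAX_VIEW_ANGLE):
            continue  # back-facing or too oblique to scan reliably
        if i in scanned:
            old += 1
        else:
            new += 1
    total = new + old
    if total == 0:
        return 0.0
    if scanned and old / total < MIN_OVERLAP:
        return 0.0  # too little overlap to register the new scan
    return float(new)  # reward newly covered surface

def next_best_view(candidates, patches, scanned):
    """Greedy NBV selection: the candidate with the highest score."""
    return max(candidates, key=lambda v: view_score(v, patches, scanned))
```

In a full system, each selected view would also be checked for reachability by the motion planner, and rejected views would trigger the exploration step described above.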