Robotic underwater vehicles can perform vast optical surveys of the ocean floor. Scientists value these surveys since optical images offer high levels of information and are easily interpreted by humans. Unfortunately, the coverage of a single image is limited by absorption and backscatter, while what is needed is an overall view of the survey area. Recent work on underwater mosaics assumes planar scenes and is applicable only to situations without much relief. We present a complete and validated system for processing optical images acquired from an underwater robotic vehicle to form a 3D reconstruction of the ocean floor. Our approach is designed for the most general conditions of wide-baseline imagery (low overlap and the presence of significant 3D structure) and scales to hundreds of images. We assume only a calibrated camera system and a vehicle with uncertain and possibly drifting pose information (e.g., a compass, a depth sensor, and a Doppler velocity log). Our approach is based on a combination of techniques from computer vision, photogrammetry, and robotics. We use a local-to-global approach to structure from motion, aided by the navigation sensors on the vehicle, to generate 3D submaps. These submaps are then placed in a common reference frame that is refined by matching overlapping submaps. The final stage of processing is a bundle adjustment that provides the 3D structure, camera poses, and uncertainty estimates in a consistent reference frame. We present results with ground truth for structure, as well as results from an oceanographic survey over a coral reef covering an area of approximately one hundred square meters.
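The abstract describes refining a common reference frame by matching overlapping submaps. As a rough illustration of that alignment step only (not the paper's actual method, which involves full structure from motion and bundle adjustment), the sketch below estimates the rigid transform between matched 3D points in two overlapping submaps using the standard SVD-based least-squares solution; all function and variable names here are illustrative.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points to dst.

    src, dst: (N, 3) arrays of matched 3D points from two overlapping
    submaps. Uses the classic SVD-based closed-form solution.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection in the SVD solution.
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known relative pose between two "submaps".
rng = np.random.default_rng(0)
pts_a = rng.uniform(-5, 5, size=(50, 3))        # points in submap A's frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
pts_b = pts_a @ R_true.T + t_true               # same points in submap B's frame
R, t = rigid_transform(pts_a, pts_b)
```

In a full pipeline, transforms like this would give initial submap placements in the global frame, which a final bundle adjustment would then jointly refine together with the 3D structure and camera poses.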