This paper describes a relative position sensing strategy that fuses monocular vision (a bearing measurement) with accelerometer and rate-gyro measurements to estimate the relative position between a free-floating underwater vehicle and a stationary object of interest. This type of position estimate is a core requirement for intervention-capable autonomous underwater vehicles, which perform autonomous manipulation tasks during which the vehicle must control its position relative to objects in its environment. For free-floating underwater vehicles, camera motion is generally unknown and must be estimated together with the relative position. Various vision-only systems have been used to estimate relative position and camera motion, but they are difficult to implement in real underwater environments. The system we propose relies on vision for relative position information, but also fuses inertial rate sensors to reduce the amount of information that must be extracted from the vision system. The result is a system that is potentially simpler and more robust than a vision-only solution. However, the use of inertial rate sensors introduces several issues. The rate measurements are subject to biases, which must be estimated to prevent unbounded drift when the measurements are integrated. The estimation problem is nonlinear, which presents several challenges in the estimator design. Finally, sufficient camera motion is required for the estimator to converge, which necessitates the design of a suitable trajectory. This paper discusses the implementation challenges, outlines an estimation algorithm uniquely adapted to this sensor fusion problem, develops a method to generate useful vehicle trajectories, and presents results from laboratory experiments with a testbed manipulator system.
For these experiments, the estimator was implemented as part of a closed-loop control system that can perform an object pick-up task.
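As a minimal illustration of why the rate-sensor biases must be estimated, the sketch below (not from the paper; all signal shapes and the bias value are assumed for illustration) integrates a simulated rate-gyro signal with a small uncompensated constant bias. The integrated error grows linearly with time, whereas subtracting a correct bias estimate before integration removes the drift entirely.

```python
import numpy as np

dt = 0.01                              # sample period [s], assumed
t = np.arange(0.0, 10.0, dt)
true_rate = 0.1 * np.sin(t)            # hypothetical true angular rate [rad/s]
bias = 0.02                            # constant gyro bias [rad/s], assumed
measured = true_rate + bias            # what the rate gyro actually reports

# Integrate by cumulative sum (rectangular rule).
true_angle = np.cumsum(true_rate) * dt
naive_angle = np.cumsum(measured) * dt           # bias is integrated too
corrected_angle = np.cumsum(measured - bias) * dt  # bias removed first

# The naive integral drifts by approximately bias * t: unbounded growth.
drift = naive_angle - true_angle
print(drift[-1])   # ≈ 0.2 rad after 10 s (0.02 rad/s * 10 s)
```

In the paper's setting the bias is not known a priori, so it is included in the estimator state and inferred online; this toy example only shows the consequence of leaving it uncompensated.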