The objective of this paper is to improve visual odometry performance through analysis of sensor noise and propagation of error through the entire visual odometry system. The visual odometry algorithm is implemented on an indoor, wheeled mobile robot (WMR) constrained to planar motion; it uses an integrated color-depth (RGB-D) camera and a one-point (1-pt), 3-degree-of-freedom inverse kinematic solution, enabling a closed-form bound on the propagated error. This paper makes three main contributions. First, feature-location errors for the RGB-D camera are quantified. Second, these feature-location errors are propagated through the entire visual odometry algorithm. Third, visual odometry performance is improved by using the predicted error to weight individual 1-pt solutions. The error bounds and the improved visual odometry scheme are experimentally verified on a WMR. Using the error-weighting scheme, the proposed visual odometry algorithm achieves approximately 1.5% error without the use of iterative outlier-rejection tools.
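The error-weighting idea described above can be illustrated with a minimal sketch. Assuming the propagated error for each 1-pt solution is expressed as a variance, a natural way to combine the solutions is inverse-variance weighting; the function name `weighted_1pt_fusion` and the numeric values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def weighted_1pt_fusion(estimates, variances):
    """Combine per-feature 1-pt motion estimates by inverse-variance weighting.

    estimates: sequence of 1-pt solutions for one motion parameter
    variances: propagated error variance for each solution
    Returns the error-weighted estimate and its combined variance.
    """
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    weights = inv_var / inv_var.sum()          # normalized weights
    fused = float(np.dot(weights, np.asarray(estimates, dtype=float)))
    fused_var = float(1.0 / inv_var.sum())     # variance of the weighted mean
    return fused, fused_var

# Hypothetical example: three 1-pt rotation estimates (rad) whose variances
# come from a feature-location error model; the noisy third estimate is
# down-weighted rather than rejected.
est, var = weighted_1pt_fusion([0.10, 0.12, 0.30], [0.01, 0.02, 0.50])
```

This soft weighting is what lets the scheme avoid iterative outlier rejection: unreliable solutions contribute little, but no hard inlier/outlier decision is required.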