Abstract:
Accurate and consistent ego motion estimation is a critical component of autonomous navigation. For this task, the combination of visual and inertial sensors is an inexpensive, compact, and complementary hardware suite that can be used on many types of vehicles. In this work, we compare two modern approaches to ego motion estimation: the Multi-State Constraint Kalman Filter (MSCKF) and the Sliding Window Filter (SWF). Both filters use an Inertial Measurement Unit (IMU) to estimate the motion of a vehicle and then correct this estimate with observations of salient features from a monocular camera. While the SWF estimates feature positions as part of the filter state itself, the MSCKF optimizes feature positions in a separate procedure without including them in the filter state. We present experimental characterizations and comparisons of the MSCKF and SWF on data from a moving hand-held sensor rig, as well as several traverses from the KITTI dataset. In particular, we compare the accuracy and consistency of the two filters, and analyze the effect of feature track length and feature density on the performance of each filter. In general, our results show the SWF to be more accurate and less sensitive to tuning parameters than the MSCKF. However, the MSCKF is computationally cheaper, has good consistency properties, and improves in accuracy as more features are tracked.
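The structural difference the abstract describes can be made concrete by looking at what each filter keeps in its state vector. The sketch below is purely illustrative and not from the paper; the dimensions (15-D IMU state, 6-D poses, 3-D feature positions) and counts are assumptions chosen for clarity.

```python
# Illustrative sketch of filter-state composition, not the paper's implementation.
# All sizes below are hypothetical, chosen only to show the contrast.
IMU_STATE_DIM = 15    # e.g. orientation, position, velocity, gyro/accel biases
POSE_DIM = 6          # a cloned past camera pose (orientation + position)
FEATURE_DIM = 3       # a 3-D landmark position
NUM_CLONE_POSES = 10  # poses kept in the sliding window
NUM_FEATURES = 20     # features currently tracked

# SWF: feature positions are estimated as part of the filter state itself,
# so the state dimension grows with the number of tracked features.
swf_state_dim = IMU_STATE_DIM + POSE_DIM * NUM_CLONE_POSES + FEATURE_DIM * NUM_FEATURES

# MSCKF: features are triangulated in a separate procedure and never enter
# the state; only the IMU state and cloned poses are filtered.
msckf_state_dim = IMU_STATE_DIM + POSE_DIM * NUM_CLONE_POSES

print(swf_state_dim)    # 135 -- grows with NUM_FEATURES
print(msckf_state_dim)  # 75  -- independent of NUM_FEATURES
```

This dimensional gap is one intuition for the abstract's computational finding: the MSCKF's state stays small regardless of how many features are tracked, which is consistent with it being cheaper while still improving in accuracy as feature count rises.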
Published in: 2015 12th Conference on Computer and Robot Vision
Date of Conference: 03-05 June 2015
Date Added to IEEE Xplore: 16 July 2015
Electronic ISBN: 978-1-4799-1986-4