In this paper, we describe FootSLAM, a Bayesian estimation approach that achieves simultaneous localization and mapping for pedestrians. FootSLAM uses odometry obtained with foot-mounted inertial sensors. Whereas existing approaches to infrastructure-less pedestrian position determination either suffer from unbounded growth of positioning error or require a priori map information or exteroceptive sensors, such as cameras or light detection and ranging (LIDAR), FootSLAM achieves long-term error stability based solely on inertial sensor measurements. An analysis of the problem based on a dynamic Bayesian network (DBN) model reveals that this surprising result becomes possible by effectively hitchhiking on human perception and cognition. Two extensions to FootSLAM are discussed: PlaceSLAM, which incorporates additional measurements or user-provided hints, and FeetSLAM, which enables automated collaborative mapping. Experimental data that validate FootSLAM and its extensions are presented. The sensors and processing power of future devices such as smartphones are likely to suffice to position the bearer with the same accuracy that FootSLAM already achieves today with foot-mounted sensors.
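To make the core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: in the FootSLAM style, each particle carries both a pose hypothesis and its own map, here simplified to transition counts over a coarse grid. Noisy odometry propagates the pose, and a particle is weighted by how often it has previously made the same grid-cell transition, so repeated walks through the same corridors reinforce self-consistent map hypotheses. The cell size, noise levels, and prior count below are assumed values chosen for illustration only.

```python
import math
import random
from collections import defaultdict

CELL = 1.0    # grid cell size in metres (assumed value)
PRIOR = 0.5   # prior pseudo-count for transitions never seen before

class Particle:
    """A pose hypothesis together with its own transition-count map."""

    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.counts = defaultdict(float)  # (cell_from, cell_to) -> count
        self.weight = 1.0

    def cell(self):
        return (int(self.x // CELL), int(self.y // CELL))

    def step(self, step_len, turn):
        """Propagate the pose with noisy odometry, then reweight by the map."""
        prev = self.cell()
        self.heading += turn + random.gauss(0.0, 0.05)
        d = step_len + random.gauss(0.0, 0.05)
        self.x += d * math.cos(self.heading)
        self.y += d * math.sin(self.heading)
        cur = self.cell()
        if cur != prev:
            # Transitions made before are favoured; new ones get the prior.
            self.weight *= self.counts[(prev, cur)] + PRIOR
            self.counts[(prev, cur)] += 1.0

def resample(particles):
    """Draw a new particle set with probability proportional to weight."""
    total = sum(p.weight for p in particles)
    out = []
    for _ in particles:
        r, acc = random.uniform(0.0, total), 0.0
        for p in particles:
            acc += p.weight
            if acc >= r:
                clone = Particle()
                clone.x, clone.y, clone.heading = p.x, p.y, p.heading
                clone.counts = defaultdict(float, p.counts)
                out.append(clone)
                break
    for p in out:
        p.weight = 1.0
    return out
```

In this simplified form, long-term error stability emerges because particles whose pose and map disagree after a loop closure accumulate low weights and are eliminated at resampling; the actual paper uses a hexagonal grid and a probabilistic transition model rather than the raw counts shown here.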