Self-localization and mapping with vision is still an open research field. Since redundancy in the sensing suite is too expensive for consumer-level robots, we rely on vision as the main sensing modality for SLAM, approaching the problem with 3D data from a trinocular vision system. Past experience shows that problems arise from inaccurate modeling of uncertainties; interestingly, we found that accuracy in modeling the robot pose uncertainty matters much less than accuracy in modeling the uncertainty of the sensed data. To overcome the severe limitations of linear and Gaussian approximations, we apply a particle-based description of the inherently non-Gaussian probability density of the sensed data, with the aim of increasing the success rate of data association, which we regard as the most important problem. More correct data associations reduce the uncertainty in the map and, consequently, in the robot pose, estimated respectively with a hierarchical map decomposition and a six-degree-of-freedom extended Kalman filter. In this paper, we present approaches for particle-based sensor modeling and data association, together with a comparative experimental evaluation on real 3D vision data.
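To illustrate the core idea, here is a minimal sketch (not the paper's implementation; all parameter values and function names are illustrative assumptions) of why the depth of a stereo-triangulated point is non-Gaussian, and of a particle-based data-association score. Gaussian pixel noise on the disparity d maps through the nonlinear triangulation z = f·b/d to a skewed depth distribution, which a particle cloud captures directly; association with a candidate landmark can then be scored by the average kernel likelihood of the landmark under the cloud rather than by a Mahalanobis gate on a Gaussian fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def triangulation_particles(disparity, n=500, baseline=0.1,
                            focal=500.0, pixel_sigma=0.5):
    """Particle cloud for the depth of a stereo-triangulated point.

    Depth z = f*b/d is nonlinear in the disparity d, so Gaussian pixel
    noise on d yields a skewed, non-Gaussian depth pdf -- the situation
    a particle-based sensor model represents without approximation.
    (Baseline, focal length, and noise level are illustrative.)
    """
    d = disparity + pixel_sigma * rng.standard_normal(n)
    d = d[d > 0.1]                  # discard non-physical samples
    return focal * baseline / d    # depth particles (metres)

def association_score(particles, landmark_z, kernel_sigma=0.1):
    """Particle-based association score: average Gaussian-kernel
    likelihood of a candidate landmark depth under the particle cloud."""
    w = np.exp(-0.5 * ((particles - landmark_z) / kernel_sigma) ** 2)
    return w.mean()

# A distant point (small disparity) has a long-tailed depth distribution:
z_particles = triangulation_particles(disparity=5.0)   # nominal z = 10 m
print("mean depth:", z_particles.mean())
print("score, correct landmark:", association_score(z_particles, 10.0))
print("score, wrong landmark:  ", association_score(z_particles, 12.0))
```

The score separates the correct candidate from a nearby wrong one even when the depth spread is large, which is the mechanism by which better-modeled sensor uncertainty raises the data-association success rate.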