An efficient method for global robot localization using a memory of omnidirectional images is presented. The method is valid for indoor and outdoor environments and is not restricted to mobile robots. The proposed strategy is purely vision-based and uses a set of prerecorded images (a visual memory) as reference. Localization consists in finding, in the visual memory, the image that best matches the current image. We propose a hierarchical process combining global descriptors, computed on a cubic interpolation of a triangular mesh, with patch correlation around Harris corners. To evaluate this method, three large image data sets have been used. Results of the proposed method are compared with those obtained from state-of-the-art techniques in terms of 1) accuracy, 2) amount of memorized data required per image, and 3) computational cost. The proposed method offers the best compromise among these criteria.
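The hierarchical lookup described above (a coarse global-descriptor shortlist, refined by patch correlation around Harris corners) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the block-mean descriptor stands in for the mesh-based global descriptor, and normalized cross-correlation is computed at fixed corner locations rather than with any search; all function names and parameters are assumptions.

```python
import numpy as np

def global_descriptor(img, grid=4):
    """Coarse descriptor: mean intensity per grid block
    (a stand-in for the paper's mesh-based descriptor)."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    return np.array([img[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                     for i in range(grid) for j in range(grid)])

def harris_response(img, k=0.04):
    """Harris corner response from finite-difference gradients."""
    gy, gx = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy
    def box(a):  # 3x3 box smoothing of the structure tensor
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

def ncc(a, b):
    """Normalized cross-correlation between two patches."""
    a = a - a.mean(); b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def localize(query, memory, shortlist=3, n_corners=20, patch=5):
    """Two-stage lookup: shortlist by global-descriptor distance,
    then rank candidates by patch correlation at Harris corners."""
    qd = global_descriptor(query)
    dists = [np.linalg.norm(qd - global_descriptor(m)) for m in memory]
    candidates = np.argsort(dists)[:shortlist]
    r = patch // 2
    resp = harris_response(query)
    # strongest corners, kept away from the image border
    ys, xs = np.unravel_index(np.argsort(resp, axis=None)[::-1], resp.shape)
    corners = [(y, x) for y, x in zip(ys, xs)
               if r <= y < query.shape[0] - r
               and r <= x < query.shape[1] - r][:n_corners]
    best, best_score = None, -np.inf
    for idx in candidates:
        m = memory[idx]
        score = sum(ncc(query[y-r:y+r+1, x-r:x+r+1],
                        m[y-r:y+r+1, x-r:x+r+1]) for y, x in corners)
        if score > best_score:
            best, best_score = int(idx), score
    return best
```

The hierarchy keeps the cost low: the cheap global descriptor prunes the visual memory to a few candidates, so the more expensive corner-based correlation runs only on the shortlist.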