We present an omnidirectional vision system implemented to provide our mobile robot with fast tracking and robust localization capabilities. An algorithm is proposed to reconstruct the environment from the omnidirectional image and to globally localize the robot on a RoboCup middle-size league field. This is accomplished by learning a set of visual landmarks, such as the goals and the corner posts. Because the environment changes dynamically and the landmarks are only partially observable, four localization cases are distinguished in order to achieve robust localization performance. Localization is performed by matching the observed landmarks, i.e., color blobs extracted from the environment. The advantages of the cylindrical projection are discussed, particularly with respect to the characteristics of the visual landmarks and the meaning of the blob extraction. The analysis is based on real-time experiments with our omnidirectional vision system on an actual mobile robot; comparative studies are presented and demonstrate the feasibility of the method.
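The cylindrical projection mentioned above is commonly obtained by unwrapping the annular mirror image into a panorama, so that each panorama column corresponds to one bearing angle around the robot. The abstract does not give the authors' implementation; the following is a minimal sketch of such an unwrapping under assumed parameters (mirror center, inner and outer radii of the useful annulus, nearest-neighbour sampling), written with NumPy. The function name `unwrap_omni` and all parameter names are illustrative, not from the paper.

```python
import numpy as np

def unwrap_omni(img, center, r_min, r_max, out_w=360, out_h=None):
    """Unwrap a catadioptric omnidirectional image into a cylindrical
    panorama by nearest-neighbour sampling along radial lines.

    img          : H x W x C uint8 array (raw mirror image)
    center       : (cx, cy) pixel coordinates of the mirror centre
    r_min, r_max : inner/outer radii of the useful mirror annulus
    out_w        : panorama width, one column per bearing angle
    """
    if out_h is None:
        out_h = r_max - r_min
    cx, cy = center
    # Bearing angles: one per output column.
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    # Radii: one per output row, spanning the useful annulus.
    radii = np.linspace(r_min, r_max, out_h)
    # Polar grid -> source pixel coordinates (rounded to nearest pixel).
    xs = np.clip((cx + np.outer(radii, np.cos(thetas))).round().astype(int),
                 0, img.shape[1] - 1)
    ys = np.clip((cy + np.outer(radii, np.sin(thetas))).round().astype(int),
                 0, img.shape[0] - 1)
    return img[ys, xs]
```

In such a panorama, a landmark's bearing is read directly from its column index, which is what makes color-blob landmarks convenient for localization: blob extraction in the unwrapped image yields bearings to the goals and corner posts with no further geometric correction.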