This paper presents a new method for visual homing of a robot moving on the ground plane. A relevant issue in vision-based navigation is the limited field of view of conventional cameras. We overcome this limitation by means of omnidirectional vision and propose a vision-based homing control scheme that relies on the 1D trifocal tensor. The technique employs a reference set of images of the environment, previously acquired at different locations, together with the images taken by the robot during its motion. In order to exploit the qualities of omnidirectional vision, we define a purely angle-based approach that requires no distance information. This approach, which takes the planar-motion constraint into account, motivates the use of the 1D trifocal tensor; in particular, the additional geometric constraints enforced by the tensor improve the robustness of the method in the presence of feature mismatches. The interest of our proposal is that the designed control scheme computes the robot velocities from angular information alone, which can be measured very precisely; in addition, we present a procedure that computes the angular relations between all the views even when they are not directly related by feature matches. The feasibility of the proposed approach is supported by a stability analysis and by results from simulations and experiments with real images.
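To illustrate the geometric object at the heart of the method, the sketch below shows how the 1D trifocal tensor constrains bearing-only measurements across three planar views. This is a minimal numerical illustration under assumed synthetic poses and landmarks, not the paper's implementation: each landmark seen from three poses yields a trilinear constraint that is linear in the eight tensor entries, so the tensor can be recovered (up to scale) from seven or more correspondences.

```python
import numpy as np

# Hedged illustration (synthetic data, not the paper's code): for three
# planar camera poses, the 1D trifocal tensor T couples the bearing-only
# projections u1, u2, u3 of a common landmark via the trilinear constraint
#   sum_{i,j,k} T[i,j,k] * u1[i] * u2[j] * u3[k] = 0,
# where each u is the homogeneous 1D image point, i.e. the unit vector
# of the landmark's bearing in that camera's frame.

rng = np.random.default_rng(0)

def rot(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

def bearing(p, c, phi):
    """Unit bearing vector of planar landmark p seen from pose (c, phi)."""
    d = rot(phi).T @ (p - c)
    return d / np.linalg.norm(d)

# Three assumed planar poses (position, heading) and random landmarks.
poses = [(np.array([0.0, 0.0]), 0.0),
         (np.array([2.0, 0.5]), 0.3),
         (np.array([1.0, 2.0]), -0.4)]
landmarks = rng.uniform(-5.0, 5.0, size=(10, 2)) + np.array([0.0, 8.0])

# Each correspondence contributes one equation, linear in the 8 tensor
# entries: the row is kron(u1, kron(u2, u3)).
A = np.array([np.kron(np.kron(bearing(p, *poses[0]),
                              bearing(p, *poses[1])),
                      bearing(p, *poses[2])) for p in landmarks])

# The tensor (up to scale) is the null vector of A; seven correspondences
# suffice, and extra ones turn the estimate into a least-squares fit.
_, _, Vt = np.linalg.svd(A)
T = Vt[-1].reshape(2, 2, 2)

# With noise-free bearings the trilinear residuals vanish numerically.
residuals = A @ T.ravel()
print(np.max(np.abs(residuals)))
```

Because each equation uses only angular measurements, the estimate inherits the precision of the bearings, which is one reason an angle-only formulation pairs naturally with omnidirectional vision.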