I. Introduction
Vision-guided robotics has been a major research topic in recent years, since the applications of visually guided systems are numerous, e.g., intelligent agents, robotic surgery, exploration rovers, and home automation [6], [8]. The use of direct visual information, acquired from measurable characteristics of the environment, to provide feedback about the state of the environment and to control a robot is commonly termed Visual Servoing. The observed characteristics are usually referred to as features. Features are extracted from the image, and their motion is mapped to the velocity twist of the camera via an interaction matrix [8]. The image of the target is a function of the relative pose between the camera and the target; the distance between them is frequently referred to as depth or range.
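As a concrete illustration of how feature motion relates to the camera twist, the sketch below implements the classical interaction matrix for a single point feature in normalized image coordinates, as found in the standard visual-servoing literature (e.g., [8]). The function name and the specific example values are illustrative assumptions, not part of this paper's method; note that the matrix depends on the feature's depth Z, which motivates the paper's focus on range.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix L for a point feature s = (x, y) given in
    normalized image coordinates at depth Z, such that s_dot = L @ v,
    where v = (vx, vy, vz, wx, wy, wz) is the camera velocity twist.
    This is the classical result for a perspective camera; names and
    values here are illustrative."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,      -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y**2, -x * y,        -x],
    ])

# Example: a point at the image center, 1 m in front of the camera.
L = point_interaction_matrix(0.0, 0.0, 1.0)
v = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # pure translation along x
s_dot = L @ v  # induced feature velocity in the image plane
```

Observe that every translational column of L is scaled by 1/Z, so an error in the assumed depth directly scales the commanded translational response.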