In this paper, the position-based and image-based robot visual servoing methods are compared, with an emphasis on system stability, robustness, sensitivity, and dynamic performance in the Cartesian and image spaces. A common comparison framework using both predefined and taught references is defined in the context of the sensory task space robot control approach. Camera, target, and robot modeling errors are considered in the comparison. Both methods are shown to be locally asymptotically stable and locally robust with respect to modeling errors. The two methods are comparably sensitive to camera and target modeling errors when predefined references are used, but insensitive to these errors when taught references are used. However, the Cartesian and image trajectories and the time to converge are affected by camera, target, and robot modeling errors regardless of the reference type. Finally, other fundamental characteristics of the two methods, including sensory task space singularities and local minima, motion coupling, and implementation issues, are also compared. The comparison results are verified in simulations.
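The two control laws contrasted above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes point features with known depths, the standard interaction-matrix (image Jacobian) form for image-based servoing, and a simplified axis-angle pose error for position-based servoing. The function names `ibvs_velocity` and `pbvs_velocity` and the gain `lam` are invented for this sketch.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) for one point feature at
    normalized image coordinates (x, y) with depth Z: relates camera
    velocity (v, omega) to the feature velocity (x_dot, y_dot)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Image-based law: v = -lam * L^+ (s - s*). The error is defined
    in image space, so convergence is judged on the image trajectory."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

def pbvs_velocity(t, t_star, axis_angle_err, lam=0.5):
    """Position-based law: v = -lam * e, where e stacks the Cartesian
    translation error with an axis-angle rotation error reconstructed
    from the (error-prone) camera/target models."""
    e = np.concatenate([np.asarray(t) - np.asarray(t_star),
                        np.asarray(axis_angle_err)])
    return -lam * e
```

With a single point feature the stacked matrix has full row rank, so the image error decays as `e_dot = -lam * e`; with several features (three or more points are needed for a well-posed 6-DOF task) the pseudoinverse yields only a least-squares decrease, which is where the image-space local minima compared in the paper arise.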