Robot navigation using an anthropomorphic visual sensor

Authors:

M. Tistarelli and G. Sandini, Dept. of Communication, Computer and System Sciences, University of Genoa, Italy

Abstract:

The use of an anthropomorphic, retina-like visual sensor for navigation tasks is investigated. Its main advantage, besides topological scaling and rotation invariance, stems from the considerable data reduction obtained with nonuniform sampling, combined with high resolution in the part of the field of view corresponding to the focus of attention. Active movements are also treated as a beneficial feature for solving the depth-from-motion problem and maintaining a 3-D representation of the viewed scene. For short-range navigation, a tracking egomotion strategy is adopted which greatly simplifies the motion equations and complements the characteristics of the retinal sensor. An algorithm for the computation of depth from motion is developed for image sequences acquired with the retinal sensor, and an error analysis is carried out to determine the uncertainty of the range measurements. An experiment is presented in which depth maps are computed from a sequence of images sampled with the retina-like sensor, building a volumetric representation of the scene.
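The data-reduction argument above can be illustrated with a short sketch of space-variant (log-polar) sampling. The Python snippet below is a minimal illustration, not the authors' actual sensor geometry or depth-from-motion algorithm; the ring/sector counts, the fixation point, and the nearest-neighbour lookup are assumptions chosen only to show how nonuniform sampling keeps full resolution near the focus of attention while sparsely sampling the periphery.

```python
# A minimal sketch of retina-like (log-polar) sampling, assuming a square
# grayscale image stored as a NumPy array. Ring/sector counts and the
# fixation point are illustrative choices, not parameters from the paper.
import numpy as np

def log_polar_sample(image, center, n_rings=32, n_sectors=64, r_min=2.0):
    """Sample `image` on a log-polar grid around `center` (row, col).

    Returns an (n_rings, n_sectors) array whose entries are the pixel values
    nearest to each retinal "receptor". Resolution is highest near the
    fixation point and falls off logarithmically with eccentricity, which is
    where the data reduction comes from.
    """
    h, w = image.shape
    r_max = min(center[0], center[1], h - 1 - center[0], w - 1 - center[1])
    # Ring radii grow geometrically: logarithmic spacing in eccentricity.
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2.0 * np.pi * np.arange(n_sectors) / n_sectors

    # Cartesian coordinates of every receptor, then nearest-pixel lookup.
    rows = center[0] + np.outer(radii, np.sin(angles))
    cols = center[1] + np.outer(radii, np.cos(angles))
    rows = np.clip(np.rint(rows).astype(int), 0, h - 1)
    cols = np.clip(np.rint(cols).astype(int), 0, w - 1)
    return image[rows, cols]

if __name__ == "__main__":
    img = np.random.rand(256, 256)              # stand-in for a camera frame
    retina = log_polar_sample(img, (128, 128))  # fixate the image centre
    print(img.size, "pixels reduced to", retina.size, "samples")
```

With these illustrative numbers, a 256x256 frame (65,536 pixels) is reduced to a 32x64 retinal array (2,048 samples) while the region around the fixation point is still sampled at roughly full resolution, which is the trade-off the abstract describes.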

Published in:

Proceedings of the 1990 IEEE International Conference on Robotics and Automation

Date of Conference:

13-18 May 1990