
A visual navigation system for autonomous land vehicles

7 Author(s)
Waxman, A.M. (Boston University, Cummington St., Boston, MA, USA); LeMoigne, J.; Davis, L.S.; Srinivasan, B.; et al.

A modular system architecture has been developed to support visual navigation by an autonomous land vehicle. The system consists of vision modules performing image processing, three-dimensional shape recovery, and geometric reasoning, as well as modules for planning, navigating, and piloting. The system runs in two distinct modes, bootstrap and feedforward. The bootstrap mode requires analysis of entire images to find and model the objects of interest in the scene (e.g., roads). In the feedforward mode (while the vehicle is moving), attention is focused on small parts of the visual field as determined by prior views of the scene, to continue to track and model the objects of interest. General navigational tasks are decomposed into three categories, all of which contribute to planning a vehicle path. They are called long-, intermediate-, and short-range navigation, reflecting the scale to which they apply. The system has been implemented as a set of concurrent communicating modules and used to drive a camera (carried by a robot arm) over a scale model road network on a terrain board. A large subset of the system has been reimplemented on a VICOM image processor and has driven the DARPA Autonomous Land Vehicle (ALV) at Martin Marietta's test site in Denver, CO.
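The two-mode operation described above (full-image bootstrap, then focused feedforward tracking) can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation; the function names, the rectangle-based region representation, and the fixed expansion margin are all hypothetical.

```python
# Hypothetical sketch of the bootstrap/feedforward mode switch described
# in the abstract. Frames are modeled as simple dicts with width/height;
# regions of interest as (x, y, w, h) tuples. All names are illustrative.

def bootstrap(frame):
    """Bootstrap mode: analyze the entire image to find objects of
    interest (e.g., roads). Here, trivially return the full frame."""
    return [(0, 0, frame["width"], frame["height"])]

def feedforward(frame, prior_regions, margin=10):
    """Feedforward mode: restrict attention to small windows predicted
    from prior views, expanded by a margin to allow for vehicle motion."""
    regions = []
    for (x, y, w, h) in prior_regions:
        regions.append((max(0, x - margin),
                        max(0, y - margin),
                        min(frame["width"], w + 2 * margin),
                        min(frame["height"], h + 2 * margin)))
    return regions

def navigate(frames):
    """Process a frame sequence: bootstrap on the first frame,
    feedforward thereafter, as the abstract describes."""
    regions = None
    for frame in frames:
        if regions is None:
            regions = bootstrap(frame)              # whole-image analysis
        else:
            regions = feedforward(frame, regions)   # focused attention
        # ...track/model objects within `regions`, plan path, pilot...
    return regions
```

In this sketch the first frame triggers a whole-image search, and every later frame only examines (slightly grown) windows around previously tracked objects, which is the attention-focusing behavior the abstract attributes to the feedforward mode.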

Published in:

IEEE Journal of Robotics and Automation (Volume: 3, Issue: 2)