Sensor based terrain guidance of distributed cooperative mobile robots

2 Author(s)
S. Aderogba; A. Shirkhodaie — Intelligent Manuf. Res. Lab., Tennessee State Univ., Nashville, TN, USA

Navigation in outdoor terrain is difficult due to the lack of easily and uniquely identifiable landmarks. The problem is further complicated when multiple robots must navigate a common terrain. This paper describes a field-capable system for navigation, obstacle avoidance, simulated visual training of mobile robots, group world-perception modeling using visual feedback from multiple robots, and fusion of sonar range data with vision information for terrain learning. A neural network approach is proposed for fusing the robots' visual feedback. Each mobile robot is presumed to be equipped with one camera and one sonar sensor. In the proposed technique, self-localization of the robots and localization of obstacles are performed based on the visual and sonar feedback processed by the neural network. The technique is evaluated in FMCell, an interactive graphical simulation environment, and results of simulation runs illustrating its capabilities are provided. The technique provides a simpler and more effective approach to visual servoing of a multi-agent system.
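The abstract's core idea, fusing a vision feature vector with a sonar range reading through a neural network to estimate a robot's position, can be sketched roughly as follows. The paper's actual network architecture, training procedure, and the FMCell interface are not given in the abstract, so the layer sizes, function names, and toy dimensions below are purely illustrative assumptions.

```python
# Hypothetical sketch: fuse per-robot vision features and a sonar range
# reading through a small feed-forward network to estimate (x, y) position.
# All dimensions and weights here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_localize(vision_feat, sonar_range, W1, b1, W2, b2):
    """One forward pass of a 1-hidden-layer MLP over concatenated sensor inputs."""
    x = np.concatenate([vision_feat, [sonar_range]])  # joint sensor vector
    h = np.tanh(W1 @ x + b1)                          # hidden layer
    return W2 @ h + b2                                # (x, y) position estimate

# Toy dimensions: 8 vision features + 1 sonar range -> 16 hidden units -> 2 outputs.
W1 = rng.normal(scale=0.1, size=(16, 9))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(2, 16))
b2 = np.zeros(2)

pose = fuse_and_localize(rng.normal(size=8), 1.7, W1, b1, W2, b2)
print(pose.shape)  # one (x, y) estimate per forward pass
```

In a multi-robot setting, each robot would run such a pass on its own sensor feedback, with the outputs combined into the group's shared world-perception model.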

Published in:

Southeastcon 2000. Proceedings of the IEEE

Date of Conference: