Decompose the operational space of FG vision system into parallel virtual planes to support autonomous navigation in dynamic environment

Habib, M.K. ; Dept. of Mech. Eng., American Univ. in Cairo, Cairo, Egypt

This article introduces a fast 3D active vision system and a new concept, based on decomposing the working space into virtual planes, to support real-time 3D obstacle detection during the navigation missions of autonomous mobile robots. The system exploits the richness and strength of vision while reducing the data load and computational cost by coarsely encoding the working space with a limited number of spatially interrelated 2D laser spots. The presence of a target within the projection view of the sensor disturbs the projected laser-spot pattern, and a disturbed spot gives information about the depth and position of the target part at that point. An efficient approach has been developed to spatially decompose the space along the detectable depth in front of the sensor into a number of parallel virtual planes perpendicular to the robot's trajectory. The footprint of the vision system grows with depth, and so does the size of the virtual planes. To facilitate real-time detection and tracking of dynamic objects and obstacles, each virtual plane is divided into five zones, and the laser spots projected within each plane and its associated zones serve as the basis for tracking. The spacing between the virtual planes represents the spatial decomposition of a spot's movement space, described by the number of pixels the spot traverses on the image plane in a specific direction. The detection of a disturbed spot at a certain virtual plane indicates the presence of an obstacle or object at that plane's range with respect to the robot. The farther a spot moves along its path, the closer the activated virtual plane is to the robot; accordingly, virtual planes closer to the robot receive higher priority in the detection and tracking process than the others. This approach has the advantage of significantly reducing the computation time for the 3D information required to detect and track objects in a dynamic environment in real time.
The paper discusses and illustrates the developed concept.
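The plane-and-zone scheme in the abstract can be sketched in a few lines: a spot that moves farther across the image plane maps to a virtual plane closer to the robot, each plane is split into five zones, and the nearest activated plane is processed first. This is an illustrative sketch only, not the authors' implementation; the plane count, pixels-per-plane spacing, and equal-width zone layout are all assumed parameters.

```python
# Illustrative sketch of the virtual-plane decomposition described in the
# abstract. NUM_PLANES, PIXELS_PER_PLANE, and the equal-width zone layout
# are assumptions for demonstration, not values from the paper.

NUM_PLANES = 8        # parallel virtual planes along the detectable depth
PIXELS_PER_PLANE = 4  # spot displacement (pixels) spanned by one plane
NUM_ZONES = 5         # each virtual plane is divided into five zones

def plane_index(displacement_px: int) -> int:
    """Map a spot's pixel displacement to a virtual-plane index.

    Larger displacement means the obstacle is closer to the robot;
    plane 0 is the closest (highest-priority) plane.
    """
    idx = NUM_PLANES - 1 - displacement_px // PIXELS_PER_PLANE
    return max(0, min(NUM_PLANES - 1, idx))

def zone_index(spot_x: float, plane_width: float) -> int:
    """Assign a spot to one of the five equal-width zones of its plane."""
    return min(NUM_ZONES - 1, int(NUM_ZONES * spot_x / plane_width))

def nearest_active_plane(disturbed_spots):
    """Return the index of the closest activated plane, or None.

    disturbed_spots: list of pixel displacements, one per disturbed spot.
    The closest plane gets priority in detection and tracking.
    """
    planes = [plane_index(d) for d in disturbed_spots]
    return min(planes) if planes else None
```

Processing the nearest activated plane first mirrors the priority rule in the abstract: obstacles at that range pose the most immediate risk to the robot, so their tracking cannot wait behind farther planes.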

Published in:

2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA)

Date of Conference:

15-18 Dec. 2009