This article introduces a fast 3D active vision system and a new concept, based on decomposing space into virtual planes, to support real-time 3D obstacle detection during the navigation missions of autonomous mobile robots. The system exploits the richness and strength of vision while reducing the data load and computational cost by coarsely encoding the working space with a limited number of spatially interrelated 2D laser spots. The presence of a target within the sensor's projection view disturbs the projected laser-spot pattern, and a disturbed spot provides information about the depth and position of the target part at that point. An efficient approach has been developed to decompose the detectable depth range in front of the sensor into a number of parallel virtual planes perpendicular to the robot's trajectory. The footprint of the vision grows with depth, and so does the size of the virtual planes. To facilitate real-time detection and tracking of dynamic objects and obstacles, each virtual plane is divided into five zones, and the laser spots projected within each plane and its associated zones serve as the basis for tracking. The distance between the virtual planes represents the spatial decomposition of a spot's movement space, described by the number of pixels it traverses on the image plane in a specific direction. Detecting a disturbed spot at a given virtual plane indicates the presence of an obstacle or object at that plane's range with respect to the robot. The farther a spot moves along its path, the closer the activated virtual plane is to the robot; accordingly, virtual planes closer to the robot receive higher priority in the detection and tracking process. This approach significantly reduces the computation time for the 3D information required to detect and track objects in a dynamic environment in real time. 
The paper discusses and illustrates the developed concept.
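The core idea, mapping a laser spot's pixel displacement to a virtual-plane index and giving priority to the plane nearest the robot, can be sketched as follows. This is only an illustrative reconstruction of the concept described in the abstract; all names, the plane count, the pixels-per-plane threshold, and the zone-assignment rule are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the virtual-plane detection concept.
# Geometry and thresholds below are assumed for demonstration only.

NUM_PLANES = 8          # parallel virtual planes along the detectable depth
PIXELS_PER_PLANE = 12   # spot displacement (in pixels) spanned by one plane
NUM_ZONES = 5           # each virtual plane is divided into five zones

def plane_index(displacement_px: int) -> int:
    """Map a spot's displacement along its path to a virtual-plane index.

    A larger displacement means the spot struck a surface closer to the
    robot, so higher indices correspond to planes nearer the sensor.
    """
    return min(displacement_px // PIXELS_PER_PLANE, NUM_PLANES - 1)

def zone_index(x_px: int, image_width: int) -> int:
    """Assign the spot to one of the five zones of its plane (here simply
    by horizontal image position, an illustrative choice)."""
    return min(x_px * NUM_ZONES // image_width, NUM_ZONES - 1)

def closest_active_plane(spots):
    """Return the nearest activated plane: detection and tracking priority
    goes to the virtual plane closest to the robot."""
    planes = [plane_index(d) for _, d in spots]
    return max(planes) if planes else None

# Each spot: (x position on the image, displacement along its path), in pixels.
spots = [(40, 3), (320, 27), (600, 14)]
print(closest_active_plane(spots))  # → 2 (plane activated by the 27-px spot)
```

Because each disturbed spot reduces to an integer plane/zone pair, the per-frame cost is a handful of integer operations per spot, which is consistent with the abstract's claim of significantly reduced computation time for real-time tracking.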
Date of Conference: 15-18 Dec. 2009