
Fusion of intensity and range data

3 Author(s)
Wallace, A. (Dept. of Comput. & Electr. Eng., Heriot-Watt Univ., Edinburgh, UK); Guanghua Zhang; Austin, B.

Recent work to improve the robustness of computer vision has included investigation of sensor fusion. The authors introduce a visual architecture comprising several parallel processes operating in a reconfigurable, concurrent framework. It consists of a conventional, intensity-based image interpretation system and a corresponding depth channel. Each channel may be implemented as a cascade of parallel processes, each of which has been implemented on a processor farm. The architecture also offers the potential for fusion at the pixel, primitive and matching levels. In order to control the several processes and to determine at which level, if any, fusion should occur, it is necessary to include a control process or processes, with the explicit goals of object identification and location, as a pre-process for manipulation or inspection. The authors concentrate on studies of the three levels of fusion of depth and intensity data of a scene acquired from a single viewpoint.
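To make the pixel-level fusion idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: two hypothetical "channels" process registered intensity and range images as concurrent processes (standing in for the processor-farm cascades described above), and their edge/discontinuity maps are combined pixel by pixel. All function names and thresholds are assumptions for illustration only.

```python
# Illustrative sketch only (not from the paper): two parallel channels
# process registered intensity and range images, then fuse at the pixel
# level by combining their edge/discontinuity maps.
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def intensity_edges(intensity, thresh=30.0):
    """Crude gradient-magnitude edge map on the intensity image (hypothetical threshold)."""
    gy, gx = np.gradient(intensity.astype(float))
    return np.hypot(gx, gy) > thresh


def range_discontinuities(depth, thresh=0.05):
    """Crude depth-step map on the registered range image (hypothetical threshold)."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy) > thresh


def fuse_pixel_level(intensity, depth):
    """Run both channels concurrently and OR the resulting binary maps."""
    with ProcessPoolExecutor(max_workers=2) as pool:
        e_int = pool.submit(intensity_edges, intensity)
        e_rng = pool.submit(range_discontinuities, depth)
        return e_int.result() | e_rng.result()


if __name__ == "__main__":
    # Synthetic, registered intensity and range images of the same size.
    rng = np.random.default_rng(0)
    intensity = rng.uniform(0, 255, (64, 64))
    depth = rng.uniform(0.5, 2.0, (64, 64))
    fused = fuse_pixel_level(intensity, depth)
    print("fused edge pixels:", int(fused.sum()))
```

Fusion at the primitive or matching level would instead combine extracted features (e.g. line segments or surface patches) or competing object hypotheses, under the direction of the control process described in the abstract.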

Published in:

IEE Colloquium on 3D Imaging and Analysis of Depth/Range Images

Date of Conference:

1 Mar 1994