Integration of vision modules and labeling of surface discontinuities

4 Author(s): Gamble, E.B.; Geiger, D.; Poggio, T.; Weinshall, D. (Artificial Intelligence Lab., MIT, Cambridge, MA)

It is assumed that a major goal of the early vision modules and their integration is to deliver a cartoon of the discontinuities in the scene and to label them in terms of their physical origin. The output of each vision module is noisy, possibly sparse, and sometimes not unique. The authors suggest using a coupled Markov random field (MRF) at the output of each module (the image cues: stereo, motion, color, and texture) to achieve two goals: first, to counteract the noise and fill in sparse data; and second, to integrate the image data within each MRF in order to find the module discontinuities and align them with the intensity edges. The authors outline a theory of how to label the discontinuities in terms of depth, orientation, albedo, illumination, and specular discontinuities. They present labeling results using a simple linear classifier operating on the output of the MRF associated with each vision module and coupled to the image data. The classifier was trained on a small set of synthetic and real data.
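The abstract's first goal, denoising sparse module output while locating discontinuities, is the classic coupled-MRF formulation in which a binary "line process" switches off smoothing across a break. The 1-D sketch below is an illustration of that idea, not the paper's implementation: the energy terms, parameter names (`lam`, `alpha`), and the iterated-conditional-modes (ICM) minimizer are all assumptions chosen for simplicity.

```python
import numpy as np

def icm_coupled_mrf(data, mask, lam=4.0, alpha=0.5, iters=30):
    """Minimal 1-D coupled MRF with a binary line process, minimized by ICM.
    Illustrative sketch only; parameters and minimizer are assumptions.

    Energy being minimized:
      E(f, l) = sum_i mask_i * (f_i - d_i)^2            # fidelity to observed data
              + lam * sum_i (1 - l_i) * (f_{i+1}-f_i)^2 # smoothness, off across a break
              + alpha * sum_i l_i                       # cost of declaring a break
    """
    d = np.asarray(data, dtype=float)
    m = np.asarray(mask, dtype=float)   # 1 where the module produced data, 0 where sparse
    n = d.size
    # Crude fill-in of the sparse sites to initialize the field.
    f = np.interp(np.arange(n), np.flatnonzero(m), d[m > 0])
    l = np.zeros(n - 1)                 # line process between sites i and i+1
    for _ in range(iters):
        # Break smoothness wherever paying alpha is cheaper than smoothing.
        l = (lam * np.diff(f) ** 2 > alpha).astype(float)
        # Exact coordinate-wise update of f given l (each f_i is quadratic).
        for i in range(n):
            wl = lam * (1 - l[i - 1]) if i > 0 else 0.0
            wr = lam * (1 - l[i]) if i < n - 1 else 0.0
            denom = m[i] + wl + wr
            if denom > 0:
                f[i] = (m[i] * d[i]
                        + (wl * f[i - 1] if i > 0 else 0.0)
                        + (wr * f[i + 1] if i < n - 1 else 0.0)) / denom
    return f, l
```

Run on a noisy, partially observed step signal, the line process fires at the step (one entry of `l` set to 1) while the field `f` is denoised and filled in on each side of the break rather than blurred across it, which is the alignment behavior the abstract describes.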

Published in:

IEEE Transactions on Systems, Man, and Cybernetics (Volume 19, Issue 6)