Segmented descriptions of 3-D surfaces

Authors (3): Fan, T. J.; Medioni, G.; Nevatia, R. (University of Southern California, Los Angeles, CA)

A method is presented to segment and describe the visible surfaces of three-dimensional (3-D) objects: the surfaces are first segmented into simple surface patches, and these patches and their boundaries are then used to describe the 3-D surfaces. First, distinguished points, which will form the edges of the segmented surface patches, are extracted using the zero-crossings and extrema of curvature along a given direction. Two different methods are used: if the sensor provides relatively noise-free range images, the principal curvatures are computed at only one resolution; otherwise, a multiple-scale approach is used and curvature is computed in four directions 45° apart to facilitate interscale tracking. These points are then grouped into curves, and the curves are classified into different classes corresponding to significant physical properties such as jump boundaries, folds, and ridge lines (or smooth extrema). The jump boundaries and folds are then used to segment the surfaces into surface patches, and a simple surface is fitted to each patch to reconstruct the original objects. These descriptions not only make explicit most of the salient properties present in the original input, but are also better suited to further processing, such as matching with a given model. The generality and robustness of this approach are illustrated on scene images obtained with different available range sensors.
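The point-extraction step described above (zero-crossings and extrema of curvature along a given direction) can be illustrated with a short sketch. The code below is a hypothetical, minimal illustration rather than the authors' implementation: it assumes the range data is a 2-D NumPy array of depth values, and the finite-difference curvature estimate, the function names, and the threshold value are all assumptions made for the example.

```python
import numpy as np

def directional_curvature(z, direction=(0, 1)):
    """Approximate curvature of the depth profile along one image direction,
    using central and second differences (np.roll wraps at the borders, which
    is acceptable for this toy example)."""
    dy, dx = direction
    fwd = np.roll(z, shift=(-dy, -dx), axis=(0, 1))
    bwd = np.roll(z, shift=(dy, dx), axis=(0, 1))
    first = (fwd - bwd) / 2.0                  # z' along the direction
    second = fwd - 2.0 * z + bwd               # z'' along the direction
    return second / (1.0 + first ** 2) ** 1.5  # curvature of a 1-D profile

def distinguished_points(z, direction=(0, 1), extremum_thresh=0.05):
    """Mark zero-crossings and strong local extrema of directional curvature;
    in the paper, points like these seed the boundaries of surface patches."""
    k = directional_curvature(z, direction)
    dy, dx = direction
    k_next = np.roll(k, shift=(-dy, -dx), axis=(0, 1))
    k_prev = np.roll(k, shift=(dy, dx), axis=(0, 1))
    zero_crossings = (k * k_next) < 0
    extrema = ((k - k_prev) * (k_next - k) < 0) & (np.abs(k) > extremum_thresh)
    return zero_crossings | extrema

if __name__ == "__main__":
    # Synthetic range image: two planar faces meeting in a fold (a roof edge).
    x = np.linspace(-1.0, 1.0, 65)
    z = np.abs(x)[None, :] * np.ones((64, 1))   # depth rises away from the fold
    mask = distinguished_points(z, direction=(0, 1))
    print("distinguished points found:", int(mask.sum()))  # marks the fold column
```

The sketch stops at marking candidate points; in the paper such points are subsequently grouped into curves and classified as jump boundaries, folds, or ridge lines before the surfaces are segmented and fitted.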

Published in: IEEE Journal of Robotics and Automation (Volume 3, Issue 6)