3D object modeling and segmentation using edge points with SIFT descriptors

Author:

Tomono, M. ; Chiba Institute of Technology, Narashino

Abstract:

Our goal is for a robot to learn specific objects (not object categories) from images. The major problem is how to separate the target object from the background. We create a scene model from an image sequence; the scene model contains both the target object and the background. We separate the target object from the background by matching the scene model against training images that have different backgrounds. A scene model consists of a 3D model and 2D models. We utilize edge points to represent detailed object shape. The 3D model is composed of 3D points reconstructed from image edge points using a structure-from-motion technique. A 2D model consists of an image in the input image sequence, the edge points in that image, and the camera pose from which the image was taken. Each edge point has a SIFT descriptor for edge-point matching, and scale-space analysis is performed to obtain scale-invariant edge points.
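The core idea of attaching descriptors to edge points (rather than to sparse corner-like keypoints) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it extracts edge points by thresholding the gradient magnitude and attaches a simple gradient-orientation histogram to each point as a stand-in for a full SIFT descriptor (the paper additionally performs scale-space analysis, which is omitted here).

```python
import numpy as np

def edge_points(img, thresh=0.2):
    """Extract edge points as pixels whose gradient magnitude
    exceeds a fraction of the maximum magnitude (simplified
    stand-in for a proper edge detector such as Canny)."""
    # np.gradient returns derivatives along axis 0 (rows) then axis 1 (cols)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > thresh * mag.max()
    ys, xs = np.nonzero(mask)
    return list(zip(ys, xs)), gx, gy

def edge_descriptor(gx, gy, y, x, radius=4, bins=8):
    """Magnitude-weighted orientation histogram over a local patch,
    L2-normalized -- a much-simplified analogue of the SIFT
    descriptor attached to each edge point."""
    h, w = gx.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    ang = np.arctan2(gy[y0:y1, x0:x1], gx[y0:y1, x0:x1])
    mag = np.hypot(gx[y0:y1, x0:x1], gy[y0:y1, x0:x1])
    hist, _ = np.histogram(ang, bins=bins,
                           range=(-np.pi, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

# Usage on a synthetic vertical-step image: edge points cluster
# along the intensity step, each carrying a unit-norm descriptor.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
pts, gx, gy = edge_points(img)
descs = [edge_descriptor(gx, gy, y, x) for (y, x) in pts]
```

Descriptors like these can then be matched between the scene model's views and the training images by nearest-neighbor search over descriptor distance, which is the role SIFT descriptors play in the paper's edge-point matching step.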

Published in:

2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008)

Date of Conference:

22-26 Sept. 2008