Robust Recognition and Pose Estimation of 3D Objects Based on Evidence Fusion in a Sequence of Images


6 Author(s)
Sukhan Lee (School of Information and Communication Engineering, Sungkyunkwan University, Suwon, Korea; phone: 82-31-299-7150; fax: 82-31-290-6479); Seongsoo Lee; Jeihun Lee; Dongju Moon

A sequence of images from multiple views, rather than a single image from a single view, offers a great advantage for robust visual recognition and pose estimation of 3D objects in noisy and visually unfriendly environments (due to texture, occlusion, illumination, and camera pose). In this paper, we present a particle-filter-based probabilistic method for recognizing an object and estimating its pose from a sequence of images, where the probability distribution of the object pose in 3D space is represented by particles. The particles are updated by consecutive observations in the image sequence and converge to a single pose. The proposed method allows easy integration of multiple pieces of evidence, both photometric and geometric, such as SIFT features, color, 3D lines, and 2D squares. Integrating this evidence across space and time makes the method robust to a variety of unfriendly visual environments. Experimental results with a single stereo camera demonstrate the validity of the proposed method in an environment containing both textured and texture-less objects.
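The predict-weight-resample loop the abstract describes can be sketched as follows. This is a minimal illustration only: it tracks a 3D translation (not the full 6-DoF pose used in the paper) and replaces the paper's multi-evidence likelihood (SIFT, color, 3D lines, 2D squares) with a synthetic Gaussian observation model; all function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_pose(observations, n_particles=500, noise=0.05):
    """Toy particle filter over a 3D position.

    Each observation is a noisy measurement of the true pose; particles
    are weighted by a Gaussian likelihood and resampled each step, so
    the particle cloud converges toward a single pose over the sequence.
    """
    # Initialize particles uniformly over a plausible workspace.
    particles = rng.uniform(-1.0, 1.0, size=(n_particles, 3))
    for z in observations:
        # Prediction: diffuse particles with small motion noise.
        particles += rng.normal(0.0, noise, size=particles.shape)
        # Update: weight each particle by the observation likelihood.
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-d2 / (2 * noise**2))
        w /= w.sum()
        # Resample: duplicate high-likelihood particles, drop the rest.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    # Point estimate: mean of the converged particle set.
    return particles.mean(axis=0)

# Synthetic image sequence: 20 noisy observations of one fixed pose.
true_pose = np.array([0.3, -0.2, 0.5])
obs = [true_pose + rng.normal(0.0, 0.05, 3) for _ in range(20)]
estimate = particle_filter_pose(obs)
```

In the paper's setting, the Gaussian likelihood above would be replaced by a fused likelihood over the available photometric and geometric cues, which is what makes the approach workable for both textured and texture-less objects.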

Published in:

Proceedings 2007 IEEE International Conference on Robotics and Automation

Date of Conference:

10-14 April 2007