Epipolar geometry estimation for non-static scenes by 4D tensor voting

3 Author(s): Wai-Shun Tong (Hong Kong Univ. of Sci. & Technol., Kowloon, China); Chi-Keung Tang; G. Medioni

In the presence of false matches and moving objects, image registration is challenging, because outlier rejection, matching, and registration become interdependent. We present an efficient and robust method, 4D tensor voting, to estimate epipolar geometries for non-static scenes and to identify matching points due to salient and independent motions. Unlike other optimization techniques, data communication in 4D tensor voting does not involve any iterative search; thus initialization, local optima, convergence, and the dimensionality of the parameter space are not problematic. Like its 8D counterpart, the only assumption we make is the pinhole camera model. Two advances are made in this work. First, we reduce the dimensionality: the 4D joint image space is isotropic and orthogonal, validating the general assumptions of tensor voting. This improvement is evidenced by the facts that only two passes are needed and that 4D tensor voting can tolerate an even larger noise-to-signal ratio (up to a ratio of five). Second, instead of discarding motion pixels as outliers, we successively extract the epipolar geometries contributed by the static background and by the matching points due to salient motions. Only two frames are needed, and no simplifying assumption (such as an affine camera model or a homographic model between images) is made. Our 4D algorithm consists of two stages: local continuity constraint propagation to remove outliers, and global consistency checking to localize a 4D topological point cone. Results on challenging datasets are presented.
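To make the outlier-rejection stage concrete, the sketch below illustrates the core idea of voting in the 4D joint image space: each candidate match (x1, y1, x2, y2) casts a distance-decayed vote to its neighbors, and matches with low accumulated saliency are rejected. This is a heavily simplified, ball-vote-only illustration (no stick-tensor orientation component and no two-pass scheme); the function name and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def tensor_voting_outlier_rejection(points, sigma=2.0, keep_ratio=0.5):
    """Simplified outlier rejection in the 4D joint image space.

    points : (n, 4) array of candidate matches (x1, y1, x2, y2).
    Each point casts a ball-tensor vote to every other point with a
    Gaussian decay on distance; points whose accumulated saliency is
    low (i.e., unsupported by their neighborhood) are marked outliers.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    saliency = np.zeros(n)
    for i in range(n):
        d = points - points[i]                   # offsets to voter i
        dist2 = np.sum(d * d, axis=1)
        w = np.exp(-dist2 / (2.0 * sigma ** 2))  # Gaussian vote decay
        w[i] = 0.0                               # no self-vote
        saliency[i] = w.sum()                    # accumulated ball-vote saliency
    # Keep the most salient fraction of matches as inliers.
    k = max(1, int(keep_ratio * n))
    inlier_mask = np.zeros(n, dtype=bool)
    inlier_mask[np.argsort(-saliency)[:k]] = True
    return inlier_mask, saliency
```

Correct matches of a rigid scene lie near a low-dimensional structure in the 4D space, so they reinforce each other's saliency, while false matches scatter and receive little support; this is the intuition behind the "local continuity constraint propagation" stage, before the global consistency check localizes the point cone.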

Published in:

Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Volume 1

Date of Conference: