Scale- and Affine-Invariant Fan Feature

Authors: Chunhui Cui and King Ngi Ngan, Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, China

Most existing feature detectors assume no surface discontinuity within a keypoint's support region and, hence, have little chance of matching keypoints located on or near surface boundaries. These keypoints, though few in number, are salient and representative. In this paper, we show that they can be successfully matched by using the proposed scale- and affine-invariant Fan features. Specifically, the image neighborhood of a keypoint is depicted by multiple fan-shaped subregions, namely Fan features, to provide robustness to surface discontinuity and background change. These Fan features are made scale-invariant by an automatic scale selection method based on the Fan Laplacian of Gaussian (FLOG). Affine invariance is further introduced to the Fan features through affine shape diagnosis of the mirror-predicted surface patch. The Fan features are then described by Fan-SIFT, an extension of the well-known scale-invariant feature transform (SIFT) descriptor. Quantitative comparisons show that the proposed Fan feature achieves repeatability comparable to state-of-the-art features on general structured scenes. Moreover, by using Fan features, we can successfully match image structures near surface discontinuities despite significant scale, viewpoint, and background changes. These structures are complementary to those found by traditional methods and are especially useful for describing weakly textured scenes, as demonstrated in our experiments on image matching and object rendering.
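The automatic scale selection mentioned in the abstract builds on the standard Laplacian-of-Gaussian (LoG) scheme: the characteristic scale of a keypoint is the scale at which the scale-normalized LoG response attains an extremum. The sketch below illustrates that generic scheme only, not the paper's FLOG variant (which restricts the operator to a fan subregion); the function name and the synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def select_characteristic_scale(image, x, y, sigmas):
    """Return the sigma at which the scale-normalized LoG response
    at pixel (y, x) has maximum magnitude (Lindeberg-style selection).
    NOTE: plain LoG sketch, not the paper's fan-restricted FLOG."""
    responses = []
    for sigma in sigmas:
        log = gaussian_laplace(image.astype(float), sigma)
        # multiply by sigma^2 to normalize responses across scales
        responses.append((sigma ** 2) * abs(log[y, x]))
    return sigmas[int(np.argmax(responses))]

# Synthetic example: a bright disk of radius 8 centered in the image.
# Theory predicts a characteristic scale near r / sqrt(2) ~ 5.7.
size = 64
yy, xx = np.mgrid[:size, :size]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 <= 8 ** 2).astype(float)
sigmas = np.linspace(1.0, 12.0, 45)
print(select_characteristic_scale(img, 32, 32, sigmas))
```

For a blob-like structure, the selected scale tracks the structure's size, which is what makes the subsequent descriptor scale-invariant; FLOG applies the same selection principle while ignoring the discontinuous side of the support region.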

Published in: IEEE Transactions on Image Processing (Volume 20, Issue 6)