
Spatially-Constrained Similarity Measure for Large-Scale Object Retrieval

Authors: Xiaohui Shen (Adobe Research, 345 Park Ave, San Jose); Zhe Lin; Jonathan Brandt; Ying Wu

One fundamental problem in object retrieval with the bag-of-words model is its lack of spatial information. Although various approaches have been proposed to incorporate spatial constraints into the model, most are either too strict or too loose, and are therefore effective only in limited cases. In this paper, a new spatially-constrained similarity measure (SCSM) is proposed to handle object rotation, scaling, viewpoint change, and appearance deformation. The similarity measure can be efficiently calculated by a voting-based method using inverted files. During the retrieval process, object localization in the database images can also be achieved simultaneously using SCSM, without post-processing. Furthermore, based on the retrieval and localization results of SCSM, we introduce a novel and robust re-ranking method using the k-nearest neighbors of the query to automatically refine the initial search results. Extensive performance evaluations on six public data sets show that SCSM significantly outperforms other spatial models, including RANSAC-based spatial verification, while k-NN re-ranking outperforms most state-of-the-art approaches using query expansion. We also adapted SCSM for mobile product image search with an iterative algorithm that simultaneously extracts the product instance from the mobile query image, identifies the instance, and retrieves visually similar product images. Experiments on two product image search data sets show that our approach can robustly localize and extract the product in the query image, and hence drastically improve retrieval accuracy over baseline methods.
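The abstract's core mechanism — a spatially-aware similarity computed by voting through an inverted file — can be illustrated with a much-simplified sketch. The sketch below is an assumption-laden toy, not the paper's actual SCSM: it handles only translation (not the rotation, scaling, or deformation the paper covers), and all names, the grid resolution, and the data layout are illustrative. Each matched visual word casts a vote for a translation hypothesis per database image; the peak vote count serves as both a spatially consistent score and a rough object location.

```python
from collections import defaultdict

# Hypothetical simplification of voting-based retrieval with an inverted
# file. Database features are indexed by visual word; each query/database
# correspondence votes into a coarse translation grid per image, so the
# grid peak gives a spatially consistent score and an approximate object
# location. GRID and all names are illustrative assumptions.

GRID = 8  # translation hypotheses quantized into an 8x8 grid (assumption)

def build_inverted_file(db_images):
    """db_images: {img_id: [(word_id, x, y), ...]}, coords normalized to [0, 1)."""
    inv = defaultdict(list)
    for img_id, feats in db_images.items():
        for word, x, y in feats:
            inv[word].append((img_id, x, y))
    return inv

def retrieve(inv, query_feats):
    """query_feats: [(word_id, x, y)]; returns {img_id: (peak_score, peak_cell)}."""
    votes = defaultdict(lambda: defaultdict(int))
    for word, qx, qy in query_feats:
        for img_id, dx, dy in inv.get(word, ()):
            # Translation hypothesis implied by this correspondence,
            # offsets in [-1, 1) mapped onto the voting grid.
            tx = int((dx - qx + 1) / 2 * GRID)
            ty = int((dy - qy + 1) / 2 * GRID)
            votes[img_id][(tx, ty)] += 1
    results = {}
    for img_id, grid in votes.items():
        cell, score = max(grid.items(), key=lambda kv: kv[1])
        results[img_id] = (score, cell)
    return results
```

Because votes accumulate per image inside the inverted-file traversal, the spatial check adds no separate verification pass — which is the property the abstract highlights (localization "without post-processing"), here shown only for the translation-only toy case.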

Published in:

IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 36, Issue: 6)