Personal Clothing Retrieval on Photo Collections by Color and Attributes

4 Author(s)
Xianwang Wang (Hewlett-Packard Labs, Hewlett-Packard Co., Palo Alto, CA, USA); Tong Zhang; D. R. Tretter; Qian Lin

Automatic personal clothing retrieval on photo collections, i.e., searching for the same clothes worn by the same person, is a nontrivial problem because photos are usually taken under completely uncontrolled, realistic imaging conditions. Typically, the captured clothing images exhibit large variations due to geometric deformation, occlusion, cluttered backgrounds, and photometric variability from illumination and viewpoint, all of which pose significant challenges to text-based or reranking-based visual search methods. In this paper, a novel framework is presented to tackle these issues by leveraging both low-level features (e.g., color) and high-level features (attributes) of clothing. First, a content-based image retrieval (CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothing attributes, including the type of clothing, sleeves, patterns, etc. Compared to low-level features, attributes are more robust to clothing variations and carry semantic meaning as high-level image representations. Distinct visual attribute detectors are learned from large amounts of training data to extract the corresponding attributes. Codebook construction and attribute-classifier training are conducted offline, which leads to fast online search performance. Extensive experiments on photo collections show that the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline; even our color-based baseline alone outperforms previous CBIR-based search approaches. The experiments also demonstrate that our approach is robust to the large variations of images taken in unconstrained environments.
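The retrieval pipeline described in the abstract (a dominant-color bag-of-visual-words baseline followed by attribute-based reranking) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the fixed color codebook, the cosine similarity measure, the binary attribute encoding, and the fusion weight `alpha` are all assumptions introduced here for clarity.

```python
import numpy as np

def color_histogram(patches, codebook):
    """Quantize dominant-color patches against a color codebook and
    return a normalized bag-of-visual-words histogram."""
    dists = np.linalg.norm(patches[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)  # nearest codeword per patch
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def baseline_rank(query_hist, db_hists):
    """Color-only CBIR baseline: rank database items by cosine similarity."""
    sims = np.array([h @ query_hist /
                     (np.linalg.norm(h) * np.linalg.norm(query_hist))
                     for h in db_hists])
    return sims.argsort()[::-1], sims

def attribute_rerank(sims, query_attrs, db_attrs, alpha=0.5):
    """Rerank by fusing color similarity with the fraction of binary
    clothing attributes (type, sleeves, pattern, ...) that agree.
    alpha is an illustrative fusion weight, not a value from the paper."""
    attr_match = np.array([(a == query_attrs).mean() for a in db_attrs])
    fused = alpha * sims + (1 - alpha) * attr_match
    return fused.argsort()[::-1]
```

In a toy run with two red-dominated database items, the color-only baseline cannot separate them, but the reranking step promotes the one whose detected attributes (e.g., same clothing type and sleeve style) agree with the query, which mirrors the role of attributes as the more robust, semantic cue.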

Published in:

IEEE Transactions on Multimedia (Volume 15, Issue 8)