Abstract:
Cross-domain clothing retrieval is a challenging task due to significant differences between online shop images, taken in controlled conditions with clean backgrounds, good lighting, and fixed poses, and street photos captured in uncontrolled conditions. In recent years, Convolutional Neural Networks (CNNs) have demonstrated their effectiveness for various computer vision problems, including image retrieval. There are two mainstream CNN-based models for image retrieval tasks: triplet network models [1] and Siamese network models [2]. In this paper, we first make a thorough comparison between the two types of models, and investigate the impact of different domain adaptation schemes, including parameter sharing, non-sharing, and a new partial-sharing strategy between the street domain and the shop domain. Extensive experiments reveal that the proposed partial-sharing scheme reduces the number of parameters by a significant margin, while achieving retrieval accuracy comparable to the state-of-the-art scheme using a triplet loss with non-shared parameters.
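The following is a minimal, illustrative sketch (not the authors' implementation) of the partial-sharing idea described in the abstract: the street and shop branches share their early convolutional layers, keep separate higher layers, and are trained with a triplet loss where the anchor is a street photo and the positive/negative are shop images. All layer sizes, module names, and the embedding dimension are assumptions for illustration; the sketch is written in PyTorch.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Simple conv-ReLU-pool block; sizes are illustrative only.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class PartialSharingNet(nn.Module):
    """Two-branch network with shared low-level layers (partial sharing)."""

    def __init__(self, embed_dim=128):
        super().__init__()
        # Low-level feature extractor shared between the street and shop domains.
        self.shared = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Domain-specific higher layers (parameters not shared).
        self.street_head = nn.Sequential(
            conv_block(64, 128),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, embed_dim),
        )
        self.shop_head = nn.Sequential(
            conv_block(64, 128),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, street_img, shop_pos, shop_neg):
        # Anchor from the street domain; positive/negative from the shop domain.
        anchor = self.street_head(self.shared(street_img))
        positive = self.shop_head(self.shared(shop_pos))
        negative = self.shop_head(self.shared(shop_neg))
        return anchor, positive, negative


if __name__ == "__main__":
    net = PartialSharingNet()
    loss_fn = nn.TripletMarginLoss(margin=1.0)
    street = torch.randn(4, 3, 224, 224)  # street photos (anchors)
    pos = torch.randn(4, 3, 224, 224)     # matching shop images
    neg = torch.randn(4, 3, 224, 224)     # non-matching shop images
    a, p, n = net(street, pos, neg)
    print(loss_fn(a, p, n).item())
```

A fully non-shared scheme would duplicate the `shared` stack per branch (more parameters), while a fully shared scheme would also reuse the head; the sketch sits between the two, which is the trade-off the paper evaluates.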
Published in: 2016 Visual Communications and Image Processing (VCIP)
Date of Conference: 27-30 November 2016
Date Added to IEEE Xplore: 05 January 2017