With the recent advances in web search ranking frameworks, a.k.a. learning to rank, it remains an open question whether such frameworks are applicable to large-scale content-based image retrieval. Moreover, given the complex structure of image representations, it is also challenging to design visual ranking features that not only scale well but also model multiple visual modalities and the spatial distributions of local features. In this paper, we address these two questions by investigating the performance of learning to rank on the large-scale content-based image retrieval problem, and by proposing several scalable visual ranking features to improve its performance. Specifically, we first adopt several well-performing ad-hoc ranking models to generate Bag-of-Visual-Words-based ranking features. Additionally, to preserve the spatial information of local image descriptors, we split images into blocks from coarse to fine and extract ranking features hierarchically in a spatial pyramid manner. Finally, global image features are quantized via locality-sensitive hashing (LSH) and concatenated with the existing ranking features. Experimental results on both the Oxford and ImageNet databases demonstrate the effectiveness and efficiency of the proposed ranking model, as well as the complementarity of the ranking features.
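As an illustration of the coarse-to-fine block splitting described above, the following sketch (hypothetical function and parameter names, not the paper's implementation; NumPy assumed) builds per-block Bag-of-Visual-Words histograms at each spatial pyramid level and concatenates them into one feature vector:

```python
import numpy as np

def spatial_pyramid_bovw(keypoints, words, vocab_size, levels=2):
    """Spatial-pyramid Bag-of-Visual-Words sketch.

    keypoints : (N, 2) array of (x, y) positions normalized to [0, 1).
    words     : (N,) array of visual-word indices in [0, vocab_size).
    levels    : pyramid depth; level l splits the image into 2^l x 2^l blocks.
    Returns the concatenation of per-block word histograms, coarse to fine.
    """
    keypoints = np.asarray(keypoints, dtype=float)
    words = np.asarray(words, dtype=int)
    feats = []
    for level in range(levels):
        grid = 2 ** level
        # Assign each keypoint to a block at this pyramid level.
        bx = np.minimum((keypoints[:, 0] * grid).astype(int), grid - 1)
        by = np.minimum((keypoints[:, 1] * grid).astype(int), grid - 1)
        block = by * grid + bx
        for b in range(grid * grid):
            # Histogram of visual words falling inside this block.
            hist = np.bincount(words[block == b], minlength=vocab_size)
            feats.append(hist)
    return np.concatenate(feats)

# Two toy keypoints in opposite corners, a 2-word vocabulary:
feat = spatial_pyramid_bovw([[0.1, 0.1], [0.9, 0.9]], [0, 1],
                            vocab_size=2, levels=2)
# Level 0 contributes one 2-bin histogram, level 1 four, so len(feat) == 10.
```

In a retrieval setting, each block-level histogram would feed the same ad-hoc ranking models used for the whole-image BoVW representation, yielding one ranking-feature group per pyramid block.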