Existing methods for image search reranking suffer from the unreliability of the assumptions under which the initial text-based image search result is employed in the reranking process. In this paper, we propose a prototype-based reranking method to address this problem in a supervised but scalable fashion. The typical assumption that the top-N images in the text-based search result are equally relevant is relaxed by linking the relevance of the images to their initial rank positions. We then employ a number of images from the initial search result as prototypes that serve to visually represent the query and that are subsequently used to construct meta rerankers. By applying different meta rerankers to an image from the initial result, reranking scores are generated, which are then aggregated using a linear model to produce the final relevance score and the new rank position for that image in the reranked search result. Human supervision is introduced to learn the model weights offline, prior to the online reranking process. While model learning requires manual labeling of the results for a few queries, the resulting model is query independent and therefore applicable to any other query. Experimental results on a representative web image search dataset comprising 353 queries demonstrate that the proposed method outperforms existing supervised and unsupervised reranking approaches. Moreover, it improves performance over the text-based image search engine by more than 25.48%.
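The pipeline the abstract describes (per-prototype meta rerankers whose scores are linearly aggregated with offline-learned, query-independent weights) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `meta_reranker_scores` stands in for the learned meta rerankers using plain cosine similarity, and `learn_weights` uses least squares as a placeholder for whatever supervised objective the authors actually optimize; all function names and the feature representation are assumptions.

```python
import numpy as np

def meta_reranker_scores(image_feat, prototypes):
    # One meta reranker per prototype image taken from the initial
    # text-based result. Here each reranker simply scores an image by
    # cosine similarity to its prototype (an assumption; the paper's
    # meta rerankers may be learned models).
    norms = np.linalg.norm(prototypes, axis=1) * np.linalg.norm(image_feat)
    return prototypes @ image_feat / np.maximum(norms, 1e-12)

def learn_weights(score_matrix, human_labels):
    # Offline supervision: fit query-independent linear weights w so that
    # score_matrix @ w approximates manually labeled relevance scores.
    # Least squares is a stand-in for the paper's learning procedure.
    w, *_ = np.linalg.lstsq(score_matrix, human_labels, rcond=None)
    return w

def rerank(features, prototypes, w):
    # Apply every meta reranker to every image, aggregate the resulting
    # scores with the learned linear model, and sort descending.
    scores = np.array([meta_reranker_scores(f, prototypes) @ w
                       for f in features])
    order = np.argsort(-scores)
    return order, scores
```

Because the weights are learned once from labels on a handful of queries and then reused, the online cost per query is just the similarity computations and one weighted sum per image, which is what makes the supervised scheme scalable.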