We present a new framework that aims to improve the effectiveness of CBIR by integrating semantic concepts extracted from text. Our model is inspired by the vector space model (VSM) developed in information retrieval. We represent each image in our collection with a vector of probabilities linking it to the different keywords. In addition to the semantic content of images, these probabilities capture the user's preferences at each step of relevance feedback. The resulting features are then combined with visual ones in the retrieval phase. Evaluation carried out on more than 10,000 images shows that this considerably improves retrieval effectiveness.
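To make the idea concrete, the sketch below illustrates a VSM-style scoring step in which each image carries both a visual feature vector and a vector of keyword probabilities, and the two similarities are fused. The combination rule (a weighted sum with weight `alpha`) and the field names are assumptions for illustration; the abstract does not specify how the semantic and visual scores are merged.

```python
import math

def cosine(u, v):
    # Cosine similarity, the standard ranking measure in the vector space model.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_score(query, image, alpha=0.5):
    # Hypothetical late fusion of visual and semantic similarities;
    # a weighted sum is assumed here, not taken from the paper.
    s_vis = cosine(query["visual"], image["visual"])
    s_sem = cosine(query["semantic"], image["semantic"])
    return alpha * s_vis + (1 - alpha) * s_sem

# Toy data: "semantic" entries are per-keyword probabilities for the image,
# which relevance feedback would update between retrieval rounds.
query = {"visual": [0.2, 0.8, 0.1], "semantic": [0.9, 0.0, 0.1]}
image = {"visual": [0.3, 0.7, 0.0], "semantic": [0.8, 0.1, 0.1]}
print(round(combined_score(query, image), 3))
```

Ranking the collection then amounts to computing this combined score against every image and sorting in descending order.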