This paper presents a learning-based framework for content-based image retrieval that bridges the gap between low-level image features and the high-level semantic information present in semantically organized image collections. Both supervised (probabilistic multi-class support vector machine) and unsupervised (fuzzy c-means clustering) learning techniques are investigated to associate global MPEG-7 color and edge features with their high-level semantic and/or visual categories. The framework represents images at a successive level of semantic abstraction based on the confidence or membership scores obtained from the learning algorithms. A fusion-based similarity matching function is then applied to these new image representations to rank and retrieve the images most similar to a query image. Experimental results on a generic image database with manually assigned semantic categories and on a medical image database with different modalities and examined body parts demonstrate the effectiveness of the proposed approach compared to the commonly used Euclidean distance measure on MPEG-7 descriptors.
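The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix, category labels, number of clusters, the specific fuzzy c-means loop, and the equal-weight cosine fusion are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for global MPEG-7 color/edge descriptors of a
# database of 60 images with 3 manually assigned semantic categories.
X = rng.normal(size=(60, 16))
y = rng.integers(0, 3, size=60)

# Supervised branch: probabilistic multi-class SVM yields per-class
# confidence scores for every image.
svm = SVC(probability=True, random_state=0).fit(X, y)
P = svm.predict_proba(X)               # shape (60, 3)

# Unsupervised branch: fuzzy c-means memberships (basic textbook update
# rule with fuzzifier m = 2; assumed parameters, not the paper's).
def fcm_memberships(X, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
        W = U.T ** m
        centers = W @ X / W.sum(axis=1, keepdims=True)
    return U

U = fcm_memberships(X)                 # shape (60, 3)

# Fusion-based similarity: cosine similarity in each confidence space,
# combined linearly (weight w is an assumed choice).
def fused_similarity(q, db, w=0.5):
    def cos(a, B):
        return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-9)
    return w * cos(q[0], db[0]) + (1 - w) * cos(q[1], db[1])

q_idx = 0                              # use the first image as the query
sims = fused_similarity((P[q_idx], U[q_idx]), (P, U))
ranking = np.argsort(-sims)            # most similar database images first
```

The key idea this illustrates is that ranking happens in the learned confidence/membership space rather than directly on the raw MPEG-7 descriptors, which is what distinguishes the approach from plain Euclidean matching on the features.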