Automatic image annotation is an important but highly challenging problem in content-based image retrieval. This paper introduces a new procedure for annotating images with semantic keywords. To bridge the semantic gap, pre-classified images are used to train a multi-class classifier that maps low-level visual features into a model space; the resulting model-vectors, computed for each individual image, characterize image content more faithfully than raw visual features. Soft labels are then assigned to unannotated images through a propagation procedure, with each label serving as a keyword carrying a probabilistic membership confidence. In this way, concept-level annotations can be provided to users. An empirical study on the COREL image database shows that the proposed model-vectors outperform raw visual features by 14.0% in F-measure for annotation.
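The pipeline described in the abstract can be sketched roughly as follows. This is an illustrative sketch only: the choice of logistic regression as the multi-class classifier, the concept names, and the synthetic feature data are assumptions made for demonstration, not the paper's actual method or dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for classified training images: each image is a
# visual feature vector with a known concept label (assumption for demo).
rng = np.random.default_rng(0)
n_concepts, n_features = 5, 16
X_train = rng.normal(size=(100, n_features))
y_train = rng.integers(0, n_concepts, size=100)

# A multi-class classifier maps visual features into the model space:
# each image is represented by its vector of per-concept probabilities.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def model_vector(x):
    """Map one visual feature vector to a model-vector of concept probabilities."""
    return clf.predict_proba(x.reshape(1, -1))[0]

# Soft labeling of an unannotated image: each keyword is paired with a
# membership confidence taken from the model-vector.
x_new = rng.normal(size=n_features)
mv = model_vector(x_new)
soft_labels = {f"concept_{k}": float(p) for k, p in enumerate(mv)}
```

In this sketch the model-vector itself supplies the soft labels; the paper's propagation procedure would instead spread labels from annotated to unannotated images within the model space, which is more appropriate for image content than raw visual features.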