Automatic image annotation is an important and promising way to narrow the semantic gap between low-level visual features and high-level semantic concepts. Here we propose an improved relevance model to solve the image annotation problem. Unlike classical approaches such as classification and translation models, the improved model discovers correlations between blobs (segmented regions) and textual keywords, and uses the resulting joint probabilities to automatically generate keywords for unannotated images. Moreover, it can detect and remove false keywords by considering keyword co-occurrence through machine learning. Experiments demonstrate that the proposed approach outperforms previous image annotation algorithms.
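To make the annotation-by-joint-probability idea concrete, the following is a minimal sketch of a basic cross-media relevance model (the standard baseline this family of approaches builds on, not the improved model itself). The training data, smoothing parameter, and function name are illustrative assumptions: each training image is a pair of blob labels and keywords, and a candidate keyword w is scored for a test image by summing P(J) * P(w|J) * prod_b P(b|J) over all training images J.

```python
from collections import Counter

def cmrm_scores(train, test_blobs, smooth=0.1):
    """Score candidate keywords for an image with a basic cross-media
    relevance model: P(w, b1..bn) is approximated by summing, over
    training images J, P(J) * P(w|J) * prod over test blobs of P(b|J).
    `train` is a list of (blob_list, keyword_list) pairs; `smooth` is an
    illustrative additive-smoothing constant."""
    vocab = {w for _, words in train for w in words}
    blob_vocab = {b for blobs, _ in train for b in blobs}
    scores = {}
    for w in vocab:
        total = 0.0
        for blobs, words in train:
            p_j = 1.0 / len(train)  # uniform prior over training images
            wc, bc = Counter(words), Counter(blobs)
            # smoothed maximum-likelihood estimates of P(w|J) and P(b|J)
            p_w = (wc[w] + smooth) / (len(words) + smooth * len(vocab))
            p_b = 1.0
            for b in test_blobs:
                p_b *= (bc[b] + smooth) / (len(blobs) + smooth * len(blob_vocab))
            total += p_j * p_w * p_b
        scores[w] = total
    return scores

# Toy training set (hypothetical): blobs paired with ground-truth keywords.
train = [
    (["sky", "water"], ["sea", "sky"]),
    (["grass", "tiger"], ["tiger", "grass"]),
    (["sky", "plane"], ["plane", "sky"]),
]
scores = cmrm_scores(train, ["sky", "water"])
top = max(scores, key=scores.get)  # highest-scoring keyword for the test image
```

The improved model described above would additionally prune keywords whose co-occurrence with the other generated keywords is implausible; that step is omitted here.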