Effective Semantic Annotation by Image-to-Concept Distribution Model

4 Author(s)
Ja-Hwung Su (Department of Computer Science and Information Engineering, National Cheng Kung University); Chien-Li Chou; Ching-Yung Lin; Vincent S. Tseng

Image annotation based on visual features has been a difficult problem due to the diverse associations that exist between visual features and human concepts. In this paper, we propose a novel approach called Annotation by Image-to-Concept Distribution Model (AICDM), which discovers the associations between visual features and human concepts from the image-to-concept distribution. Through the proposed distribution model, visual features and concepts can be bridged to achieve high-quality image annotation. We introduce "visual features", "models", and "visual genes", which play roles analogous to the biological chromosome, DNA, and gene, respectively. Based on the proposed models using entropy, tf-idf, rules, and SVM, high-quality image annotation can be achieved effectively. Our empirical evaluation reveals that AICDM effectively alleviates the problem of visual-to-concept diversity and achieves better annotation results than many existing state-of-the-art approaches in terms of precision and recall.
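To make the entropy-based idea in the abstract concrete, the sketch below builds a toy image-to-concept distribution for each visual word and scores its concept specificity with Shannon entropy. This is a minimal illustration of the general technique, not the paper's actual algorithm; all data and names (`visual_word_concepts`, `vw1`, `vw2`) are hypothetical.

```python
import math
from collections import Counter

# Hypothetical toy data: each visual word (a "visual gene") is associated
# with the concepts of the training images in which it appears.
visual_word_concepts = {
    "vw1": ["sky", "sky", "sea", "sky"],          # mostly one concept
    "vw2": ["cat", "dog", "sky", "car"],          # spread across concepts
}

def concept_distribution(concepts):
    """Normalized image-to-concept distribution for one visual word."""
    counts = Counter(concepts)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def entropy(dist):
    """Shannon entropy of the distribution.

    Low entropy means the visual word concentrates on few concepts and is
    therefore a more reliable cue for annotation; high entropy reflects
    the visual-to-concept diversity problem.
    """
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

dist1 = concept_distribution(visual_word_concepts["vw1"])
dist2 = concept_distribution(visual_word_concepts["vw2"])
print(entropy(dist1))  # concentrated on "sky": lower entropy
print(entropy(dist2))  # uniform over four concepts: higher entropy
```

An annotation system in this style would favor concepts backed by low-entropy visual words, optionally combined with tf-idf-like weighting and a discriminative classifier such as an SVM, as the abstract indicates.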

Published in:

IEEE Transactions on Multimedia (Volume: 13, Issue: 3)