Effective image semantic annotation by discovering visual-concept associations from image-concept distribution model

Authors: Ja-Hwung Su (Dept. of Comput. Sci. & Inf. Eng., Nat. Cheng Kung Univ., Tainan, Taiwan); Chien-Li Chou; Ching-Yung Lin; V. S. Tseng

Abstract:

To date, existing studies have not been very successful at image annotation because of critical problems such as the diverse regularities between visual features and human concepts. Such diverse regularities make it difficult to annotate image semantics correctly. In this paper, we propose a novel approach called AICDM (Annotation by Image-Concept Distribution Model) for image annotation, which discovers the associations between visual features and human concepts from the image-concept distribution. Through the proposed image-concept distribution model, the uncertain regularities between visual features and human concepts can be clarified to achieve high-quality image annotation. Empirical evaluations also reveal that the proposed AICDM method effectively alleviates the uncertain-regularity problem and yields better annotation results than existing approaches in terms of precision and recall.
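The abstract does not spell out the model's internals, but the general idea of annotating an image via visual-concept associations can be illustrated with a minimal sketch. The code below is not the authors' AICDM algorithm; it assumes a simple setup in which each training image is represented as a bag of quantized visual words, a visual-word-to-concept co-occurrence table is accumulated, and a new image is annotated with the concepts whose aggregated conditional probabilities are highest. All function and variable names (e.g., build_cooccurrence, annotate) are hypothetical.

```python
from collections import Counter, defaultdict

def build_cooccurrence(training_images):
    """Accumulate visual-word / concept co-occurrence counts.

    training_images: iterable of (visual_words, concepts) pairs, where
    visual_words is a list of quantized feature IDs (e.g., cluster indices)
    and concepts is a list of ground-truth annotation labels.
    """
    cooc = defaultdict(Counter)          # visual word -> concept counts
    for visual_words, concepts in training_images:
        for w in visual_words:
            for c in concepts:
                cooc[w][c] += 1
    return cooc

def annotate(visual_words, cooc, top_k=3):
    """Rank concepts for a new image by summing the per-visual-word
    concept distributions P(concept | visual word)."""
    scores = Counter()
    for w in visual_words:
        counts = cooc.get(w)
        if not counts:
            continue                     # unseen visual word: no evidence
        total = sum(counts.values())
        for c, n in counts.items():
            scores[c] += n / total       # normalized contribution
    return [c for c, _ in scores.most_common(top_k)]

if __name__ == "__main__":
    # Toy training data: quantized visual word IDs paired with concept labels.
    train = [
        ([1, 2, 2, 5], ["sky", "beach"]),
        ([2, 3, 5],    ["beach", "sea"]),
        ([4, 4, 6],    ["forest"]),
    ]
    cooc = build_cooccurrence(train)
    print(annotate([2, 5, 6], cooc))     # ranked concept predictions
```

In this illustrative setup, the co-occurrence table plays the role of an image-concept distribution: it ties low-level visual evidence to high-level labels, which is the kind of association the paper aims to discover and exploit.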

Published in:

2010 IEEE International Conference on Multimedia and Expo (ICME)

Date of Conference:

19-23 July 2010