Automatic Image Annotation Based on Visual Cognitive Theory

5 Author(s)
Yusuke Kamoi (Department of Computer Science, Meiji University, 1-1-1 Higashi Mita, Tama-ku, Kawasaki-shi, Kanagawa 214-8571, Japan); Yosuke Furukawa; Tatsuya Sato; Yuya Kiwada

This paper presents a new method of automatic image annotation based on visual cognitive theory that improves the accuracy of image recognition by considering two semantic levels of keywords that give feedback to each other. Our system first segments an image and recognizes objects using the K-Nearest Neighbor (KNN) algorithm. It then recognizes contexts from the recognized objects by using networked knowledge. Finally, it re-recognizes objects depending on these contexts. We conducted experiments on natural images to verify the system's effectiveness and obtained improved recognition rates compared with KNN alone. These results indicate that a system taking the semantic levels of keywords into account has great potential for enhancing image recognition.
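The abstract's pipeline (segment, label objects with KNN, infer a context from the labels via networked knowledge, then re-recognize objects under that context) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `KNOWLEDGE` map standing in for the paper's knowledge network, the label names, and the simple vote-based context inference are all hypothetical.

```python
from collections import Counter
import math

# Hypothetical stand-in for the paper's "networked knowledge":
# object label -> set of contexts in which it plausibly appears.
KNOWLEDGE = {
    "sand": {"beach"}, "sea": {"beach"}, "palm": {"beach"},
    "snow": {"mountain"}, "rock": {"mountain", "beach"},
}

def knn_label(feature, training, k=3):
    """Plain KNN: label a segment's feature vector by majority vote
    among its k nearest (feature, label) training pairs."""
    nearest = sorted(training, key=lambda fl: math.dist(feature, fl[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

def infer_context(labels):
    """Context recognition: pick the context most frequently implied
    by the initial object labels, per the knowledge map."""
    votes = Counter(c for lab in labels for c in KNOWLEDGE.get(lab, ()))
    return votes.most_common(1)[0][0] if votes else None

def rerecognize(candidates, context):
    """Re-recognition step: for each segment's ranked candidate labels,
    prefer the first label consistent with the inferred context."""
    result = []
    for cands in candidates:
        consistent = [lab for lab in cands if context in KNOWLEDGE.get(lab, ())]
        result.append(consistent[0] if consistent else cands[0])
    return result
```

For example, if initial KNN labels for an image's segments are `["sand", "sea", "rock"]`, `infer_context` returns `"beach"`, and an ambiguous segment whose KNN candidates were `["snow", "rock"]` would be re-labeled `"rock"` as the context-consistent choice. This shows the feedback direction the abstract describes: low-level object labels inform the high-level context, which in turn revises the object labels.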

Published in:

NAFIPS 2007 - 2007 Annual Meeting of the North American Fuzzy Information Processing Society

Date of Conference:

24-27 June 2007