This paper presents a new automatic image annotation method, based on visual cognitive theory, that improves recognition accuracy by considering two semantic levels of keywords that give feedback to each other. Our system first segments an image and recognizes objects with the K-Nearest Neighbor (KNN) algorithm. It then infers contexts from these objects using networked knowledge, and re-recognizes the objects under the inferred contexts. We conducted experiments on natural images and verified the system's effectiveness, obtaining higher recognition rates than KNN alone. These results show that a system that takes the semantic levels of keywords into account has great potential for enhancing image recognition.
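The pipeline described above (KNN labeling of segmented regions, context inference from networked knowledge, then context-constrained re-recognition) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors, the `CONTEXT_KNOWLEDGE` table, and the function names are all hypothetical stand-ins for the system's actual segmentation features and networked knowledge.

```python
from collections import Counter
import math

# Hypothetical "networked knowledge": which object labels each context admits.
CONTEXT_KNOWLEDGE = {
    "beach": {"sand", "sea", "sky", "person"},
    "city": {"building", "car", "road", "person"},
}

def knn_labels(region_features, training_set, k=3):
    """Label each segmented region with the majority label of its k nearest neighbors."""
    labels = []
    for feat in region_features:
        neighbors = sorted(training_set, key=lambda t: math.dist(feat, t[0]))[:k]
        labels.append(Counter(lbl for _, lbl in neighbors).most_common(1)[0][0])
    return labels

def infer_context(labels):
    """Pick the context whose known objects overlap most with the first-pass labels."""
    return max(CONTEXT_KNOWLEDGE,
               key=lambda c: len(CONTEXT_KNOWLEDGE[c] & set(labels)))

def rerecognize(region_features, training_set, context, k=3):
    """Re-run KNN restricted to training examples consistent with the context."""
    consistent = [t for t in training_set if t[1] in CONTEXT_KNOWLEDGE[context]]
    return knn_labels(region_features, consistent, k)
```

The feedback between the two semantic levels is the key step: a region whose first-pass label conflicts with the inferred context (e.g. "car" in a "beach" scene) is relabeled using only context-consistent training examples.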