Automatic image annotation (AIA) refers to the association of words with whole images and is considered a promising and effective approach to bridging the semantic gap between low-level visual features and high-level semantic concepts. In this paper, we formulate image annotation as a multi-label, multi-class semantic image classification problem and propose a simple yet effective algorithm: hybrid self-learning with alternating space between uni-modality and bi-modality, which integrates multi-label boosting and asymmetric binary SVM-based active learning into a joint hierarchical classification framework to perform cross-modal image annotation by incorporating unlabeled images. We conducted experiments on a medium-sized image collection of about 15,000 images from Corel Stock Photo CDs. The results show that our method can enhance a given annotation model by exploiting unlabeled images, demonstrating the effectiveness of the proposed algorithm and the feasibility of using unlabeled data to improve annotation accuracy.
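To illustrate the self-learning principle the abstract builds on, the sketch below shows a generic self-training loop: a base classifier is fit on labeled data, then iteratively assigns pseudo-labels to its most confident unlabeled examples and retrains on the enlarged set. This is a minimal, hypothetical illustration using a toy one-dimensional nearest-centroid classifier; the paper's actual method instead combines multi-label boosting with asymmetric binary SVM-based active learning in a hierarchical framework, and all names and thresholds here are illustrative assumptions, not the authors' implementation.

```python
# Toy self-training (self-learning) loop with a 1-D nearest-centroid
# classifier. Confidence is measured by the margin between the two
# nearest class centroids; only high-margin unlabeled points are
# pseudo-labeled and added to the training set.

def centroids(labeled):
    """Mean feature value per class, from (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_with_margin(model, x):
    """Nearest-centroid prediction plus the gap to the runner-up class."""
    dists = sorted((abs(x - m), y) for y, m in model.items())
    margin = (dists[1][0] - dists[0][0]) if len(dists) > 1 else float("inf")
    return dists[0][1], margin

def self_train(labeled, unlabeled, threshold=1.0, rounds=5):
    """Iteratively pseudo-label confident unlabeled points and retrain."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = centroids(labeled)
        confident, rest = [], []
        for x in pool:
            y, margin = predict_with_margin(model, x)
            (confident if margin >= threshold else rest).append((x, y))
        if not confident:          # no confident predictions left: stop
            break
        labeled.extend(confident)  # pseudo-labels join the training set
        pool = [x for x, _ in rest]
    return centroids(labeled)

# Illustrative data: two well-separated classes plus one ambiguous point.
labeled = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
unlabeled = [0.5, 1.5, 8.5, 9.5, 5.2]
model = self_train(labeled, unlabeled)
print(predict_with_margin(model, 2.0)[0])  # → a
```

The ambiguous point (5.2) never clears the confidence threshold and is never pseudo-labeled, which is the key safeguard in self-training: low-confidence predictions are kept out of the training set so that early mistakes do not reinforce themselves.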