
Spatial Markov Kernels for Image Categorization and Annotation


Authors:
Zhiwu Lu and Horace H. S. Ip, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

This paper presents a novel discriminative stochastic method for image categorization and annotation. We first divide the images into blocks on a regular grid and then generate visual keywords by quantizing the features of the image blocks. The traditional Markov chain model is generalized to capture 2-D spatial dependence between visual keywords by defining the notion of "past" as what has already been observed in a row-wise raster scan. The proposed spatial Markov chain model can be trained via maximum-likelihood estimation and then used directly for image categorization. Since this is a purely generative method, we can further improve it by developing discriminative learning. Hence, spatial dependence between visual keywords is incorporated into kernels in two different ways, for use with a support vector machine in a discriminative approach to the image categorization problem. Moreover, a kernel combination is used to handle rotation and multiscale issues. Experiments on several image databases demonstrate that our spatial Markov kernel method for image categorization achieves promising results. When applied to image annotation, which can be considered a multilabel image categorization process, our method also outperforms state-of-the-art techniques.
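The generative stage described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes each image has already been reduced to a grid of visual-keyword indices, and it defines the "past" of each grid cell as its left and upper neighbours in a row-wise raster scan. Transition probabilities are estimated by smoothed counting (maximum-likelihood with Laplace pseudo-counts), and an image is scored by the log-likelihood of its keyword grid under each class model.

```python
import numpy as np

def train_smc(keyword_grids, n_keywords, alpha=1.0):
    """Estimate spatial Markov transition probabilities from keyword grids.

    For every grid cell, count how often its keyword follows the keywords
    of its left and upper neighbours (the raster-scan "past"), with
    Laplace smoothing alpha to avoid zero probabilities.
    """
    counts = np.full((n_keywords, n_keywords), alpha)
    for grid in keyword_grids:
        rows, cols = grid.shape
        for r in range(rows):
            for c in range(cols):
                w = grid[r, c]
                if c > 0:
                    counts[grid[r, c - 1], w] += 1  # left neighbour -> w
                if r > 0:
                    counts[grid[r - 1, c], w] += 1  # upper neighbour -> w
    # Normalize each row into a conditional distribution P(w | neighbour).
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(grid, trans):
    """Log-likelihood of one keyword grid under a trained transition model."""
    ll = 0.0
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            w = grid[r, c]
            if c > 0:
                ll += np.log(trans[grid[r, c - 1], w])
            if r > 0:
                ll += np.log(trans[grid[r - 1, c], w])
    return ll
```

For categorization, one such model would be trained per class and a test image assigned to the class whose model gives the highest log-likelihood; the paper's discriminative variant instead embeds these spatial statistics into kernels for an SVM.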

Published in:

IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) (Volume: 41, Issue: 4)