Exploiting Semantic and Visual Context for Effective Video Annotation

Jian Yi, Yuxin Peng, and Jianguo Xiao
Institute of Computer Science and Technology, Peking University, Beijing, China

We propose a new method to refine video annotation results by exploiting the semantic and visual context of video. On one hand, semantic context mining is performed in a supervised way using the manual concept labels of the training set. Because semantic context is learned from human-assigned labels, it reflects human intention and is very useful for boosting annotation performance. In this paper, we model the spatial and temporal context in video using conditional random fields with different structures. Compared with existing methods, our method captures concept relationships in video more accurately and improves annotation performance more effectively. On the other hand, visual context mining is performed in a semi-supervised way based on the visual similarities among video shots. Visual context reflects the natural visual properties of video and can be regarded as a complement to semantic context, which generally cannot be modeled perfectly. In this paper, we construct a graph based on the visual similarities among shots, and a graph-based semi-supervised learning approach is adopted to propagate the probabilities of reliable shots to other shots with similar visual features. Extensive experiments on the widely used TRECVID datasets demonstrate the effectiveness of our method in improving video annotation accuracy.
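The graph-based propagation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: shots are graph nodes, edge weights come from a Gaussian kernel on visual features (the kernel choice, the `alpha` damping factor, and the function and parameter names are all assumptions for illustration), and the annotation probabilities of reliable shots are spread to visually similar shots while the reliable scores stay clamped.

```python
import numpy as np

def propagate_scores(features, scores, reliable, n_iters=50, alpha=0.9, sigma=1.0):
    """Sketch of graph-based semi-supervised score propagation.

    features: (n_shots, d) visual feature vectors, one per shot
    scores:   (n_shots,) initial concept probabilities
    reliable: indices of shots whose scores are trusted (clamped)
    """
    X = np.asarray(features, dtype=float)
    # Pairwise squared distances -> Gaussian similarity weights
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)           # no self-loops
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    deg = W.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    S = W * dinv[:, None] * dinv[None, :]
    y = np.asarray(scores, dtype=float)
    f = y.copy()
    for _ in range(n_iters):
        # Blend neighborhood-smoothed scores with the initial scores
        f = alpha * (S @ f) + (1.0 - alpha) * y
        f[reliable] = y[reliable]      # clamp reliable shots
    return f
```

With two visually distinct groups of shots and one reliable shot per group, the unlabeled shot near the reliable positive shot receives a high score, while the one near the reliable negative shot stays low.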

Published in:

IEEE Transactions on Multimedia (Volume: 15, Issue: 6)