Learning-Based Prediction of Visual Attention for Video Signals

4 Author(s)
Wen-Fu Lee; Tai-Hsiang Huang; Su-Ling Yeh; H. H. Chen — Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan

Visual attention, an important characteristic of the human visual system, is a useful cue for real-world image processing and compression applications. This paper proposes a computational scheme that combines low-level and high-level features to predict visual attention in video signals by machine learning. The choice of low-level features (color, orientation, and motion) is grounded in studies of visual cells, and the choice of the human face as a high-level feature is grounded in studies of media communications. We show that such a scheme is more robust than schemes that use only low-level or only high-level features. Unlike conventional techniques, our scheme learns the relationship between features and visual attention, avoiding the perceptual mismatch between estimated salience and actual human fixation. We also show that selecting representative training samples according to the fixation distribution improves the efficacy of regressive training. Experimental results demonstrate the advantages of the proposed scheme.
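The pipeline the abstract describes — extracting per-pixel color, orientation, and motion maps plus a face map, then learning a mapping from those features to recorded human fixations — can be sketched as follows. This is an illustrative outline only, not the paper's implementation: the feature maps are crude stand-ins for the biologically motivated channels, a linear least-squares fit stands in for the paper's regressive training stage, and the face map, frame sizes, and fixation data are all hypothetical placeholders.

```python
import numpy as np

def feature_maps(prev_frame, frame, face_map):
    """Build per-pixel feature maps. All four channels are rough proxies
    for the paper's low-level (color, orientation, motion) and high-level
    (face) features; this is an illustrative sketch, not the actual model."""
    gray = frame.mean(axis=2)
    # Color: deviation of each pixel from the frame's mean color.
    color = np.abs(frame - frame.mean(axis=(0, 1))).sum(axis=2)
    # Orientation: gradient magnitude as a crude orientation-energy proxy.
    gy, gx = np.gradient(gray)
    orientation = np.hypot(gx, gy)
    # Motion: absolute luminance difference between consecutive frames.
    motion = np.abs(gray - prev_frame.mean(axis=2))
    return np.stack([color, orientation, motion, face_map], axis=-1)

def fit_saliency_regressor(features, fixations):
    """Least-squares fit of a linear map from features to fixation density
    (a stand-in for the learned regressor described in the abstract)."""
    X = features.reshape(-1, features.shape[-1])
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias term
    w, *_ = np.linalg.lstsq(X, fixations.ravel(), rcond=None)
    return w

def predict_saliency(features, w):
    """Apply the learned weights to produce a per-pixel saliency map."""
    X = features.reshape(-1, features.shape[-1])
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ w).reshape(features.shape[:2])

# Toy demo on synthetic data (48x64 frames, hypothetical face detection).
rng = np.random.default_rng(0)
prev_f = rng.random((48, 64, 3))
frame = rng.random((48, 64, 3))
face = np.zeros((48, 64))
face[10:20, 20:35] = 1.0            # hypothetical face-detector output
fixations = rng.random((48, 64))    # stand-in for recorded fixation density
feats = feature_maps(prev_f, frame, face)
w = fit_saliency_regressor(feats, fixations)
saliency = predict_saliency(feats, w)
```

In the paper, the regression stage is additionally trained on representative samples selected by the fixation distribution; in this sketch every pixel is used uniformly.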

Published in:

IEEE Transactions on Image Processing (Volume 20, Issue 11)