Salient object detection based on spatiotemporal attention models

2 Author(s)

In this paper we propose a method for the automatic detection of salient objects in video streams. The video is first segmented into shots using a scale-space filtering graph-partition method. Next, we introduce a combined spatial and temporal video attention model, which couples a region-based contrast saliency measure with a novel temporal attention model. The camera/background motion is determined using a set of homographic transforms, estimated by recursively applying the RANSAC algorithm to SIFT interest-point correspondences, while other types of motion are identified using agglomerative clustering and temporal region consistency. A decision is taken based on the combined spatial and temporal attention models. Finally, we demonstrate how the extracted saliency map can be used to create segmentation masks. The experimental results validate the proposed framework and show that our approach is effective for various types of video, including noisy and low-resolution data.
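The core of the camera-motion step is estimating a homography from point correspondences with RANSAC. The following is a minimal sketch of that idea in plain NumPy: it is not the authors' implementation, and it uses synthetic correspondences in place of SIFT matches (extracting SIFT keypoints would require an image library such as OpenCV). The functions and parameters here are illustrative assumptions.

```python
import numpy as np

def fit_homography(src, dst):
    # Direct Linear Transform: stack two equations per correspondence
    # and take the null space of A via SVD to get the 3x3 homography.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def ransac_homography(src, dst, iters=500, thresh=2.0, rng=None):
    # Repeatedly fit a homography to a random minimal sample (4 points)
    # and keep the model with the most inliers (reprojection error < thresh).
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    src_h = np.hstack([src, ones])  # homogeneous coordinates
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = proj[:, :2] / proj[:, 2:3]
            err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh  # NaN/inf compare as False
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model for a final estimate.
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

In the paper's pipeline this estimation is applied recursively, peeling off the dominant (background) motion so the remaining correspondences can be clustered into other motion groups; the sketch above covers only a single RANSAC pass.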

Published in:

2013 IEEE International Conference on Consumer Electronics (ICCE)

Date of Conference:

11-14 Jan. 2013