Relevance of a Feed-Forward Model of Visual Attention for Goal-Oriented and Free-Viewing Tasks

Authors: Le Meur, O. (ESIR, Univ. of Rennes 1, Rennes, France); Chevet, J.-C.

A purely bottom-up model of visual attention is proposed and compared to five state-of-the-art models. The role of low-level visual features is examined in two contexts, using two datasets: one containing data from an eye-tracking experiment performed in a free-viewing task, and a second containing 5000 hand-labeled pictures in which observers enclosed the most visually interesting objects in a rectangle. The relevance of the bottom-up models, i.e., their ability to predict where salient areas are located, is evaluated. Across all metrics and datasets, the degree of similarity between predictions and ground truth is significantly above chance. The proposed model, which relies on a small number of features, is shown to be a good predictor not only of human visual fixations but also of the objects chosen as interesting by observers. This study suggests that low-level visual features play a significant role not only in a free-viewing task but also in a high-level visual task, such as choosing the object of interest in a complex visual scene. Another outcome concerns the viewing duration used in eye-tracking experiments: results suggest that this parameter is not as critical as one might expect.
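The paper's own evaluation code is not reproduced here. As a minimal illustrative sketch only, the Python snippet below shows two metrics commonly used in the saliency literature to compare a predicted saliency map with eye-tracking ground truth: Pearson's linear correlation coefficient (CC) against a fixation density map, and ROC AUC treating fixated pixels as positives. All names and the synthetic arrays are hypothetical, not taken from the paper.

    # Illustrative sketch (not the authors' evaluation code): two common
    # similarity metrics between a saliency map and fixation ground truth.
    import numpy as np

    def correlation_coefficient(saliency_map, fixation_density):
        """Pearson CC between a saliency map and a fixation density map."""
        s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
        f = (fixation_density - fixation_density.mean()) / (fixation_density.std() + 1e-12)
        return float((s * f).mean())

    def roc_auc(saliency_map, fixation_mask):
        """ROC AUC: saliency at fixated pixels (positives) vs. all other pixels,
        computed via the rank-based Mann-Whitney U statistic."""
        pos = saliency_map[fixation_mask]    # saliency values at fixated locations
        neg = saliency_map[~fixation_mask]   # saliency values elsewhere
        ranks = np.argsort(np.argsort(np.concatenate([pos, neg]))) + 1
        u = ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
        return float(u / (len(pos) * len(neg)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        saliency = rng.random((64, 64))           # hypothetical model output
        fixations = rng.random((64, 64)) > 0.98   # hypothetical fixation mask
        print("CC :", correlation_coefficient(saliency, fixations.astype(float)))
        print("AUC:", roc_auc(saliency, fixations))

A chance-level predictor yields CC near 0 and AUC near 0.5, which is the baseline the abstract's "significantly above chance" claim refers to.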

Published in: IEEE Transactions on Image Processing (Volume: 19, Issue: 11)