Reading Users' Minds From Their Eyes: A Method for Implicit Image Annotation

Authors: S. N. Hajimirza (Multimedia & Vision Group, Queen Mary University of London, London, UK), M. J. Proulx, and E. Izquierdo

This paper explores possible solutions for image annotation and retrieval by implicitly monitoring user attention via eye tracking. Features are extracted from the gaze trajectory of users examining sets of images to provide implicit information on the target template that guides visual attention. Our Gaze Inference System (GIS) is a fuzzy-logic-based framework that analyzes the gaze-movement features to assign a user interest level (UIL) from 0 to 1 to every image that appears on the screen. Because some properties of the gaze features are unique to each user, our user-adaptive framework builds a new processing system for every new user to achieve higher accuracy. The generated UILs can be used for image annotation; however, the output of our system is not limited to annotation and can also be used for retrieval or other scenarios. The developed framework produces promising and reliable UILs: approximately 53% of the target images in users' minds can be identified by the machine with an error of less than 20%, and the top 10% of them with no error. We show in this paper that the information present in gaze patterns can be employed to improve the machine's judgement of image content through the assessment of human interest and attention to the objects inside virtual environments.
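The abstract describes the GIS pipeline only at a high level. The following is a minimal, hypothetical Python sketch of how a fuzzy-logic inference step of this general kind might map per-image gaze features to a UIL in [0, 1]. The feature set (fixation time, fixation count, revisits), the triangular membership functions, the two rules, and all thresholds are illustrative assumptions for exposition, not the authors' published GIS design.

from dataclasses import dataclass


@dataclass
class GazeFeatures:
    """Per-image gaze features extracted from the eye-tracking trajectory."""
    fixation_time: float   # total fixation duration on the image, in seconds
    fixation_count: int    # number of fixations that landed on the image
    revisits: int          # number of times the gaze returned to the image


def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def user_interest_level(f: GazeFeatures) -> float:
    """Map gaze features to a user interest level (UIL) in [0, 1].

    Fuzzifies each feature, fires two toy rules, and defuzzifies by a
    weighted average of the rule consequents. All parameters are
    placeholders; a user-adaptive system would tune them per user.
    """
    # Fuzzification (membership parameters are illustrative guesses).
    long_fix = tri(f.fixation_time, 0.2, 1.5, 3.0)
    many_fix = tri(float(f.fixation_count), 1.0, 6.0, 12.0)
    returns = tri(float(f.revisits), 0.0, 2.0, 5.0)

    # Rule firing strengths (min acts as fuzzy AND).
    r_high = min(long_fix, many_fix)  # long dwell AND many fixations -> high interest
    r_mid = returns                   # repeated revisits -> moderate interest

    # Defuzzification: weighted average of consequents (high = 1.0, mid = 0.5).
    total = r_high + r_mid
    if total == 0.0:
        return 0.0
    return (r_high * 1.0 + r_mid * 0.5) / total


if __name__ == "__main__":
    features = GazeFeatures(fixation_time=1.2, fixation_count=7, revisits=2)
    print(user_interest_level(features))  # prints a UIL of roughly 0.72

A real system would use many more gaze features and rules, and, as the abstract notes, would rebuild or retune the membership functions for each new user rather than fix them globally as done here.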

Published in:

IEEE Transactions on Multimedia (Volume: 14, Issue: 3)