Integrating visual exploration and visual search in robotic visual attention: The role of human-robot interaction

2 Author(s)
Begum, M.; Dept. of ECE, Univ. of Waterloo, Waterloo, ON, Canada; Karray, F.

A common characteristic of computational models of visual attention is that they execute the two modes of visual attention (visual exploration and visual search) separately. This makes such a visual attention model unsuitable for real-world robotic applications. This paper focuses on integrating visual exploration and visual search in a common framework of visual attention and on the challenges resulting from such integration. It proposes a visual attention-oriented, speech-based human-robot interaction framework that helps a robot switch back and forth between the two modes of visual attention. A set of experiments is presented to demonstrate the performance of the proposed framework.
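The abstract describes a robot switching between exploration and search modes in response to spoken commands. As an illustration only (this is not the authors' implementation; the class, command phrases, and mode names below are hypothetical), such mode switching can be sketched as a small state machine driven by recognized utterances:

```python
from enum import Enum, auto

class AttentionMode(Enum):
    EXPLORATION = auto()  # bottom-up: freely scan the scene for salient regions
    SEARCH = auto()       # top-down: look for a specific spoken target

class AttentionController:
    """Toy controller that switches attention modes on speech commands.

    Hypothetical sketch: assumes speech has already been recognized
    into a text utterance by some upstream recognizer.
    """

    def __init__(self):
        self.mode = AttentionMode.EXPLORATION  # default: free exploration
        self.target = None                     # search target, if any

    def on_speech(self, utterance):
        """Update the attention mode based on a recognized utterance."""
        words = utterance.lower().split()
        if words[:2] == ["find", "the"] and len(words) > 2:
            # e.g. "find the red ball" -> enter search mode for that target
            self.mode = AttentionMode.SEARCH
            self.target = " ".join(words[2:])
        elif words[:1] == ["explore"] or words[:2] == ["stop", "searching"]:
            # return to bottom-up exploration
            self.mode = AttentionMode.EXPLORATION
            self.target = None
        return self.mode
```

For example, `on_speech("find the red ball")` would put the controller into search mode with target "red ball", and `on_speech("explore")` would return it to exploration; the real framework presumably couples such switches to the underlying saliency computation.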

Published in:

2011 IEEE International Conference on Robotics and Automation (ICRA)

Date of Conference:

9-13 May 2011