Effects of navigation design on Contextualized Video Interfaces

Authors:

Yi Wang and Doug A. Bowman; Dept. of Computer Science and Center for Human-Computer Interaction, Virginia Tech, USA

Abstract:

Monitoring and responding in real time to events observed across multiple surveillance cameras can impose an overwhelmingly high mental workload. Contextualized Video Interfaces, which place surveillance videos within their spatial context, can support these tasks. For users to integrate information from the videos and the spatial context as events progress in real time, navigation interfaces are required; however, different tasks appear to favor different navigation techniques. In this paper, we describe a formal evaluation of four navigation designs for Contextualized Video Interfaces. The four designs arise from two important factors of navigation techniques: navigation mode (manual or semi-automatic) and navigation context (overview or detailed view). To avoid a piecemeal understanding of the techniques, we evaluated them using three tasks with differing information requirements. While semi-automatic navigation was generally preferred, low-DOF manual navigation techniques proved useful in certain situations, and the choice between overview and detailed-view navigation depends primarily on the task's information requirements. Based on these findings, we provide guidelines for selecting designs according to task features.

Published in:

2011 IEEE Symposium on 3D User Interfaces (3DUI)

Date of Conference:

19-20 March 2011