Tracking of humans and estimation of body/head orientation from top-view single camera for visual focus of attention analysis

Authors:

Ozturk, O.; Yamasaki, T.; Aizawa, K. (Faculty of Engineering, University of Tokyo, Tokyo, Japan)

Abstract:

This paper addresses the problem of determining a person's body and head orientations while tracking the person in an indoor environment monitored by a single top-view camera. The challenging part of this problem lies in the wide range of human postures, which depend on the person's position relative to the camera and on the articulations of the pose. In this work, a two-level cascaded particle filter approach is introduced to track humans: color cues are used at the first level of each iteration, and edge-orientation histograms are then used to support the tracking at the second level. To determine body and head orientations, a combination of Shape Context and SIFT features is proposed. Body orientation is calculated by matching the upper region of the body against predefined shape templates and finding the orientation within ranges of π/8 (22.5 degrees). Then, the optical flow vectors of SIFT features around the head region are calculated to evaluate the direction and type of the motion of the body and head. Preliminary results show that body and head orientations are successfully estimated, and a discussion of various motion patterns and of future improvements for more complicated situations is also given.
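To make the two-level cascade concrete, the following Python sketch shows one possible iteration: particles are first weighted and resampled on a color-histogram cue, then re-weighted by an edge-orientation-histogram cue. It is an illustrative reconstruction under stated assumptions (a Bhattacharyya-style histogram likelihood and a random-walk motion model, neither of which is specified in the abstract), not the authors' implementation; all function names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def hist_likelihood(obs_hist, ref_hist, sigma=0.1):
        # Bhattacharyya-coefficient likelihood between two normalized histograms.
        # (A common choice for histogram cues; assumed here, not taken from the paper.)
        bc = np.sum(np.sqrt(obs_hist * ref_hist))
        return np.exp(-(1.0 - bc) / sigma)

    def cascaded_step(particles, weights, observe_color, observe_eoh,
                      ref_color, ref_eoh, motion_std=3.0):
        # One iteration of the two-level cascade:
        #   level 1: weight and resample particles using a color cue,
        #   level 2: re-weight the survivors using edge-orientation histograms.
        n = len(particles)
        particles = particles + rng.normal(0.0, motion_std, particles.shape)  # random-walk motion

        w1 = np.array([hist_likelihood(observe_color(p), ref_color) for p in particles]) * weights
        w1 /= w1.sum()
        particles = particles[rng.choice(n, size=n, p=w1)]  # resample on the color weights

        w2 = np.array([hist_likelihood(observe_eoh(p), ref_eoh) for p in particles])
        w2 /= w2.sum()

        estimate = np.average(particles, axis=0, weights=w2)  # weighted mean position
        return particles, w2, estimate

    # Toy usage with synthetic observation functions (purely illustrative).
    ref_color = np.ones(16) / 16          # reference color histogram
    ref_eoh = np.ones(8) / 8              # reference edge-orientation histogram
    observe_color = lambda p: ref_color   # stand-ins for real image measurements
    observe_eoh = lambda p: ref_eoh
    particles = rng.normal([100.0, 100.0], 5.0, size=(200, 2))
    weights = np.ones(200) / 200
    particles, weights, estimate = cascaded_step(
        particles, weights, observe_color, observe_eoh, ref_color, ref_eoh)
    print("estimated position:", estimate)

In a real system the observation functions would extract histograms from the image patch around each particle; the cascade's point is that the cheaper color cue prunes the particle set before the edge-orientation cue refines it.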

Published in:

2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops)

Date of Conference:

Sept. 27 - Oct. 4, 2009