Combining spatial and temporal priors for articulated human tracking with online learning

Authors: Cheng Chen and Guoliang Fan, School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, Oklahoma, USA

We study articulated human tracking by combining spatial and temporal priors in an integrated online learning and inference framework, where body parts can be localized and segmented simultaneously. The temporal prior is represented by the motion trajectory in a low-dimensional latent space learned from the tracking history, and it predicts the configuration of each body part for the next frame. The spatial prior is encoded by a star-structured graphical model and embedded in the temporal prior; it can be constructed "on the fly" from the predicted pose and used to evaluate and correct the prediction by assembling part detection results. Both temporal and spatial priors can be learned online and incrementally through the Back-Constrained Gaussian Process Latent Variable Model (BC-GPLVM), which operates over a temporal sliding window. Experiments show that the proposed algorithm achieves accurate and robust tracking results for different walking subjects with significant appearance and motion variability.
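The predict-correct loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' method: PCA stands in for the BC-GPLVM latent model, a constant-velocity step in latent space stands in for the learned motion trajectory, and a simple weighted blend stands in for the star-structured spatial model that assembles part detections. All class and parameter names here are hypothetical.

```python
import numpy as np

class SlidingWindowTracker:
    """Sketch of a sliding-window predict-correct tracking loop.

    PCA (via SVD) is used as a stand-in for BC-GPLVM, and a weighted
    average of prediction and detection stands in for the star-structured
    spatial correction. Illustrative only."""

    def __init__(self, window=10, latent_dim=2):
        self.window = window          # temporal sliding window length
        self.latent_dim = latent_dim  # dimensionality of the latent space
        self.history = []             # recent pose vectors (tracking history)

    def _fit_latent(self):
        # Re-learn the low-dimensional latent space from the recent history,
        # mimicking incremental online learning over a sliding window.
        X = np.array(self.history[-self.window:])
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = Vt[:self.latent_dim]

    def predict(self):
        # Temporal prior: extrapolate the latent trajectory one step ahead
        # with a constant-velocity assumption, then map back to pose space.
        self._fit_latent()
        Z = (np.array(self.history[-self.window:]) - self.mean) @ self.basis.T
        z_next = Z[-1] + (Z[-1] - Z[-2])
        return self.mean + z_next @ self.basis

    def correct(self, prediction, detection, alpha=0.5):
        # Spatial-prior stand-in: blend the predicted pose with evidence
        # from part detections, then append the result to the history.
        pose = alpha * prediction + (1 - alpha) * detection
        self.history.append(pose)
        return pose
```

For example, seeding the history with poses moving on a straight line makes `predict()` extrapolate one step along that line, after which `correct()` pulls the estimate toward the detected pose. The actual paper replaces both stand-ins with learned models: BC-GPLVM for the latent trajectory and a part-based graphical model for the correction.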

Published in:

2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops)

Date of Conference:

Sept. 27 - Oct. 4, 2009