Data-Free Prior Model for Upper Body Pose Estimation and Tracking

Authors: Jixu Chen (Computer Vision Lab, GE Global Research Center, Niskayuna, NY, USA); Siqi Nie; Qiang Ji

Video-based human body pose estimation seeks to estimate the human body pose from an image or a video sequence that captures a person performing some activity. To handle noise and occlusion, a pose prior model is often constructed and then combined with the pose estimated from the image data to achieve more robust body pose tracking. Various body prior models have been proposed. Most of them are data-driven, typically learned from 3D motion capture data. Besides being expensive and time-consuming to collect, such data-based prior models cannot generalize well to activities and subjects not present in the motion capture data. To alleviate this problem, we propose to learn the prior model from anatomical, biomechanical, and physical constraints rather than from motion capture data. To this end, we propose methods that can effectively capture different types of constraints and systematically encode them into the prior model. Experiments on benchmark data sets show that, compared with data-based prior models, the proposed prior model achieves comparable performance for body motions that are present in the training data, while significantly outperforming the data-based prior models in generalizing to different body motions and to different subjects.
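
The core idea of building a prior directly from constraints rather than learning it from motion capture data can be illustrated with a minimal sketch. The joint-angle limits, the Gaussian observation model, and the sampling-based fusion below are all illustrative assumptions for a toy 3-joint arm, not the paper's actual formulation or inference procedure.

```python
import numpy as np

# Hypothetical anatomical joint-angle limits in radians for a 3-joint chain
# (shoulder, elbow, wrist flexion). Values are illustrative only.
JOINT_LIMITS = np.array([
    [-0.7, 3.0],   # shoulder
    [ 0.0, 2.6],   # elbow
    [-1.2, 1.2],   # wrist
])

def constraint_log_prior(angles, sharpness=20.0):
    """Log-prior built from limit constraints instead of motion capture data:
    angles inside the anatomical range are not penalized, angles outside the
    range incur a quadratic penalty."""
    lo, hi = JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1]
    below = np.clip(lo - angles, 0.0, None)
    above = np.clip(angles - hi, 0.0, None)
    return -sharpness * np.sum(below ** 2 + above ** 2)

def fuse_with_observation(observed_angles, obs_std=0.15, n_samples=2000, seed=0):
    """Combine a noisy per-frame pose estimate with the constraint-based prior
    by sampling around the observation and keeping the best-scoring pose
    (a simple stand-in for whatever inference the paper actually uses)."""
    rng = np.random.default_rng(seed)
    samples = observed_angles + obs_std * rng.standard_normal((n_samples, 3))
    # Gaussian observation log-likelihood of each candidate pose.
    log_lik = -0.5 * np.sum(((samples - observed_angles) / obs_std) ** 2, axis=1)
    log_post = log_lik + np.array([constraint_log_prior(s) for s in samples])
    return samples[np.argmax(log_post)]

if __name__ == "__main__":
    # A noisy estimate whose elbow angle violates the limit (hyperextension).
    noisy = np.array([0.5, -0.3, 0.2])
    refined = fuse_with_observation(noisy)
    print("noisy  :", noisy)
    print("refined:", refined)  # elbow is pulled back toward the feasible range
```

The same pattern extends to other constraint types (e.g., limb-length or symmetry constraints) by adding further penalty terms to the log-prior; no motion capture data is required to define them.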

Published in:

IEEE Transactions on Image Processing (Volume 22, Issue 12)