This paper describes a generative Bayesian model for tracking an articulated 3D human skeleton in an image sequence. The model infers the subject's appearance, pose, and movement, and provides a novel method for implicitly modelling depth and self-occlusion, two issues that have been identified as drawbacks of existing approaches. We also employ a switching linear dynamical system to propose skeleton configurations efficiently. The model is verified on synthetic data, and a video clip from the CAVIAR data set is used to demonstrate the potential of the methodology for tracking on real data.
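To illustrate the proposal mechanism, the following is a toy sketch of a switching linear dynamical system: a discrete motion mode evolves as a Markov chain, and each mode drives its own linear dynamics on a low-dimensional pose vector. The modes, transition probabilities, and dynamics matrices here are purely hypothetical placeholders, not the paper's learned parameters.

```python
import random

# Hypothetical mode transition probabilities (illustrative only).
TRANSITION = {
    "walk": {"walk": 0.9, "turn": 0.1},
    "turn": {"walk": 0.3, "turn": 0.7},
}

# Per-mode linear dynamics x_{t+1} = A x_t + noise on a toy 2-D "pose".
DYNAMICS = {
    "walk": [[1.0, 0.1], [0.0, 1.0]],     # constant-velocity-like update
    "turn": [[0.95, -0.1], [0.1, 0.95]],  # slight rotation
}

def step_mode(mode, rng):
    """Sample the next discrete mode from the Markov chain."""
    r, cum = rng.random(), 0.0
    for nxt, p in TRANSITION[mode].items():
        cum += p
        if r < cum:
            return nxt
    return nxt  # guard against floating-point round-off

def step_pose(pose, mode, rng, noise=0.01):
    """Propose the next pose under the current mode's linear dynamics."""
    a = DYNAMICS[mode]
    return [
        sum(a[i][j] * pose[j] for j in range(2)) + rng.gauss(0.0, noise)
        for i in range(2)
    ]

def propose_trajectory(pose, mode, steps, seed=0):
    """Roll the SLDS forward to generate a sequence of pose proposals."""
    rng = random.Random(seed)
    out = []
    for _ in range(steps):
        mode = step_mode(mode, rng)
        pose = step_pose(pose, mode, rng)
        out.append((mode, pose))
    return out
```

In a tracker, such samples would serve as proposals whose weights are then evaluated against the image likelihood; the switching structure lets the proposal distribution adapt to qualitatively different motion regimes.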