We present an effective real-time approach for automatically reconstructing 3D human body poses from monocular video sequences. In this approach, the human body is first detected automatically in the video sequence; image features such as silhouette, edges, and color are then extracted and integrated to infer the 3D pose iteratively, by minimizing a cost function defined between the 2D features of the projected 3D model and those of the image sequence. After convergence, the reconstruction result is evaluated to detect tracking failure, which can be recovered quickly by adjusting the initial pose and restarting the minimization procedure. The results demonstrate the efficiency and robustness of the proposed approach.
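The pipeline described above (project the model, compare 2D features, minimize, then restart on failure) can be sketched in miniature. This is not the paper's implementation: the linear `A` stands in as a hypothetical projection of a pose vector into a feature space (in the actual system this would be rendering the 3D body model and extracting silhouette/edge/color features), the sum-of-squares cost, the learning rate, and the restart offset are all illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for projecting a 3D pose into 2D image features
# (silhouette/edge/color descriptors in the real system).
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.3, 0.7]])

def project_features(pose):
    return A @ pose

def cost(pose, observed):
    # Discrepancy between projected-model features and image features.
    r = project_features(pose) - observed
    return float(r @ r)

def refine_pose(observed, pose, lr=0.1, iters=500):
    # Iterative minimization of the cost (plain gradient descent here;
    # the paper's optimizer is not specified in this sketch).
    for _ in range(iters):
        grad = 2.0 * A.T @ (project_features(pose) - observed)
        pose = pose - lr * grad
    return pose

def track_frame(observed, init_pose, fail_thresh=1e-4):
    pose = refine_pose(observed, init_pose)
    # Failure detection after convergence: a large residual triggers a
    # restart from an adjusted initial pose (offset chosen arbitrarily).
    if cost(pose, observed) > fail_thresh:
        pose = refine_pose(observed, init_pose + 0.5)
    return pose

if __name__ == "__main__":
    true_pose = np.array([0.8, -0.3])
    observed = project_features(true_pose)
    recovered = track_frame(observed, init_pose=np.zeros(2))
    print(recovered)
```

In a real tracker the previous frame's pose would seed `init_pose` for the next frame, which is what makes the restart-on-failure step necessary when that seed drifts too far from the true pose.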