This paper presents a general framework that efficiently recovers intrinsic hand configurations and viewpoints from monocular video, based on a "multi-view and continuous motion" manifold learnt by Locality Preserving Projections (LPP). Firstly, the 3D joint angles of a gesture are related offline to its 2D silhouettes projected from multiple viewpoints via a 3D-2D mapping table. Then, an LPP-based filtering algorithm (LPP-FA) is presented that converts the multiple-motion recognition and reconstruction problems into a classification issue among embedding spaces, together with a proximity-query and prediction process within each embedding space. Finally, combined with an improved multiple-gesture tracking method that fuses skin-color cues with oriented k-DOPs (ODOPs), the proposed method estimates hand configurations and viewpoints accurately, robustly, and efficiently.
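The two core operations the abstract names, learning an LPP embedding and answering proximity queries within it, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors stand in for silhouette descriptors, and the neighbourhood size `k`, heat-kernel bandwidth `t`, and regularisation constant are illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5):
    """Minimal Locality Preserving Projections sketch.
    X: (n_samples, n_features) row-wise data. Returns a projection matrix A."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    t = np.median(d2[d2 > 0])        # heat-kernel bandwidth (heuristic choice)
    # k-nearest-neighbour adjacency graph with heat-kernel weights
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]
        W[i, idx] = np.exp(-d2[i, idx] / t)
    W = np.maximum(W, W.T)           # symmetrise the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                        # graph Laplacian
    # generalised eigenproblem  X^T L X a = lam X^T D X a
    M1 = X.T @ L @ X
    M2 = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for stability
    _, vecs = eigh(M1, M2)
    return vecs[:, :n_components]    # eigenvectors of the smallest eigenvalues

# hypothetical silhouette descriptors (one row per training frame)
rng = np.random.default_rng(0)
train = rng.normal(size=(40, 10))
A = lpp(train)
emb = train @ A                      # embedded training set

# proximity query: project a perturbed copy of frame 3 and find its neighbour
query = (train[3] + 0.001 * rng.normal(size=10)) @ A
nearest = int(np.argmin(np.sum((emb - query) ** 2, axis=1)))
```

Because LPP yields a linear projection, a new observation is embedded by a single matrix product, which is what makes the per-frame proximity query cheap enough for tracking.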