This paper proposes a novel method for hand-pose estimation that can be used in vision-based human interfaces. The aim of this method is to estimate all joint angles of the hand. In this method, hand regions are extracted from multiple images obtained by a multiviewpoint camera system. By integrating these multiviewpoint silhouette images, the hand pose is reconstructed as a "voxel model." All joint angles are then estimated by three-dimensional model fitting between the hand model and the voxel model. Two experiments were performed: (1) estimation of joint angles from silhouette images generated by a hand-pose simulator and (2) hand-pose estimation using real hand images. The experimental results indicate the feasibility of the proposed algorithm for vision-based interfaces, although the algorithm requires a faster implementation for real-time processing.
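The voxel-reconstruction step described above, integrating multiviewpoint silhouettes into a voxel model, is a form of shape-from-silhouette carving: a voxel is kept only if it projects inside the hand silhouette in every view. The following is a minimal sketch of that idea, not the paper's implementation; the function names, the camera projection matrices, and the grid layout are all illustrative assumptions.

```python
import numpy as np

def carve_voxels(silhouettes, projections, grid):
    """Shape-from-silhouette voxel carving (illustrative sketch).

    silhouettes: list of HxW boolean masks, one per camera view
    projections: list of 3x4 camera projection matrices
    grid:        (N, 3) array of voxel-center coordinates
    Returns a length-N boolean occupancy array: a voxel survives
    only if it projects inside the silhouette in every view.
    """
    homog = np.hstack([grid, np.ones((grid.shape[0], 1))])  # (N, 4)
    occupied = np.ones(grid.shape[0], dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                            # (N, 3) image coords
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros_like(occupied)
        hit[inside] = mask[v[inside], u[inside]]     # inside this silhouette?
        occupied &= hit                              # carve away misses
    return occupied
```

With enough well-placed views, the surviving voxels approximate the hand's visual hull, which the paper then fits with an articulated hand model to recover joint angles.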