We propose a novel affine-invariant model of handshape-appearance images that offers a compact and descriptive representation of hand configurations. Our approach combines: (1) a hybrid representation of both the shape and the appearance of the hand that models handshapes without any landmark points; (2) modeling of the shape-appearance images as a linear combination of variation images followed by an affine transformation, which accounts for modest pose variation; and (3) an optimization-based fitting process that yields the estimated variation-image coefficients, which are then employed as features. The proposed model is applied to handshapes from Sign Language video data after segmentation and tracking. It is evaluated in extensive handshape-classification experiments that investigate the effect of the involved parameters and provide a variety of comparisons to baseline approaches from the literature. Absolute improvements of at least 10.5% indicate the effectiveness of our approach for the handshape classification problem.
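The core of such a linear shape-appearance model can be sketched as follows. This is a minimal illustration, not the paper's implementation: the variation images would normally be learned (e.g. via PCA on training handshapes) and the affine-alignment step is omitted; here the mean image `A0` and variation images `A` are random placeholders, and the coefficients are estimated by plain least squares.

```python
import numpy as np

# Hypothetical setup: mean image A0 and k variation images A_i, so a
# handshape image is modeled as A0 + sum_i c_i * A_i (up to an affine
# transform, omitted here). Random placeholders stand in for learned images.
rng = np.random.default_rng(0)
h, w, k = 32, 32, 5              # image size and number of variation images
A0 = rng.normal(size=(h, w))     # mean shape-appearance image
A = rng.normal(size=(k, h, w))   # variation images

def fit_coefficients(image, A0, A):
    """Least-squares estimate of the variation-image coefficients
    for a given (already aligned) shape-appearance image."""
    residual = (image - A0).ravel()
    basis = A.reshape(A.shape[0], -1).T   # (h*w, k) design matrix
    coeffs, *_ = np.linalg.lstsq(basis, residual, rcond=None)
    return coeffs

# Synthesize an image from known coefficients and recover them;
# the recovered coefficients are the features used for classification.
true_c = np.array([1.0, -0.5, 0.25, 0.0, 2.0])
img = A0 + np.tensordot(true_c, A, axes=1)
est = fit_coefficients(img, A0, A)
print(np.allclose(est, true_c))  # True: coefficients recovered exactly
```

In the paper's setting the fitting is an optimization that also estimates the affine transform jointly with the coefficients; the sketch above shows only the linear-combination part.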
Date of Conference: 26-29 Sept. 2010