This paper presents a gesture recognition framework based on voxel data obtained through visual hull reconstruction from multiple cameras. View-invariant pose descriptors are extracted by projecting the voxel data onto a low-dimensional pose coefficient space using multilinear analysis. Gestures are then treated as sequences of pose descriptors and modeled with hidden Markov models for recognition. Promising results have been obtained on a public data set containing 11 single-person gestures and another data set comprising seven two-person cooperative dance gestures.
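The pipeline in the abstract can be illustrated with a minimal sketch: flatten each voxel grid, project it onto a low-dimensional pose space (here a plain SVD projection stands in for the paper's multilinear analysis), and score a resulting descriptor sequence under a hidden Markov model with the forward algorithm. All function names, dimensions, and model parameters below are hypothetical choices for illustration, not the paper's actual implementation.

```python
import numpy as np

def pose_coefficients(voxel_grids, k=3):
    """Project flattened, mean-centered voxel grids onto a k-dimensional
    pose coefficient space via SVD (a linear stand-in for the paper's
    multilinear analysis)."""
    X = np.array([v.ravel() for v in voxel_grids], dtype=float)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T  # shape: (n_frames, k) pose descriptors

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial state probs, A: state transition matrix, B: emission
    matrix) computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()          # scaling factor avoids underflow
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return ll + np.log(alpha.sum())
```

In a recognition setting, the pose descriptors would be quantized into a discrete symbol alphabet (e.g. by nearest codebook entry), one HMM would be trained per gesture class, and a test sequence would be assigned to the class whose HMM gives the highest forward log-likelihood.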