Human vision can perceive body movements and actions effortlessly; for machines, achieving comparable performance remains highly challenging. Many research results have shown that both visual attention and perceptual organization are crucial to visual perception tasks. In recent years, gesture recognition for human-computer interaction (HCI) has drawn increasing attention because of high application demand. Building on visual perceptual theories and hypotheses, we propose a 3D gesture recognition framework that is coherent and biologically plausible. It mainly comprises perceptual gesture feature extraction, hierarchical salience map construction, and qualitative reasoning for gesture recognition.
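To make the salience-map component concrete, the sketch below computes a single-channel, Itti-Koch-style center-surround salience map from an intensity image. This is only an illustrative baseline, not the hierarchical construction proposed in the paper; all function names are hypothetical.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 mean pooling."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsampling to a target shape."""
    h, w = img.shape
    rows = np.minimum((np.arange(shape[0]) * h) // shape[0], h - 1)
    cols = np.minimum((np.arange(shape[1]) * w) // shape[1], w - 1)
    return img[np.ix_(rows, cols)]

def salience_map(intensity, levels=4):
    """Center-surround salience: sum |center scale - surround scale| over scale pairs."""
    pyramid = [intensity]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    sal = np.zeros_like(intensity)
    for c in range(levels):                 # finer "center" scale
        for s in range(c + 1, levels + 1):  # coarser "surround" scale
            sal += np.abs(upsample(pyramid[c], intensity.shape)
                          - upsample(pyramid[s], intensity.shape))
    m = sal.max()
    return sal / m if m > 0 else sal
```

Regions that contrast strongly with their surroundings at several scales (e.g. a hand against a uniform background) receive high salience, which is the attentional cue the framework exploits before feature extraction.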