In this paper we propose a gesture perception algorithm that uses a compact one-dimensional representation of spatio-temporal motion-field patches. At the learning stage, motion-field patches are randomly extracted and stored as templates. To generate the feature vector for a video sequence, we compare each stored template with the video, compute its maximum similarity, and store that value as an element of the feature vector. To reduce the cost of patch comparison, we project the spatio-temporal motion data in each patch along both the spatial and temporal spans. Preliminary gesture perception experiments were conducted, and promising results were obtained despite the simplified procedure.
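The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the similarity measure (here cosine similarity), the projection scheme (here mean profiles along each axis), and all function names are assumptions, since the abstract does not specify them.

```python
import numpy as np

def extract_templates(training_patches, num_templates, rng=None):
    """Learning stage: randomly select stored motion-field patches as
    templates (patch extraction from raw video is not shown)."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(training_patches), size=num_templates, replace=False)
    return [training_patches[i] for i in idx]

def project_patch(patch):
    """Compact 1-D representation (assumed scheme): project a 3-D
    (t, y, x) motion patch onto mean profiles along the temporal and
    two spatial axes, concatenated into one vector."""
    return np.concatenate([patch.mean(axis=(1, 2)),   # temporal profile
                           patch.mean(axis=(0, 2)),   # vertical profile
                           patch.mean(axis=(0, 1))])  # horizontal profile

def cosine_similarity(a, b):
    # Assumed similarity measure; the paper does not name one.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def feature_vector(video_patches, templates):
    """One element per template: the maximum similarity between that
    template and any patch taken from the video sequence."""
    projected = [project_patch(p) for p in video_patches]
    return np.array([max(cosine_similarity(project_patch(t), v)
                         for v in projected)
                     for t in templates])
```

The resulting fixed-length feature vector could then be fed to any standard classifier for gesture recognition; that stage is outside the scope of this sketch.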