This paper presents a framework for building VideoPlace-like vision-driven user interfaces using "optical flow" measurements and an elastic labeled silhouette. Optical flow not only detects movement but also yields an estimate of its direction and speed. The proposed representation is based on a self-organizing system designed to learn both the characteristic features of the image and their spatial relationships, without requiring initialization or special settings. The positions of the units composing the system allow information about the position and dynamics of the observed figure to be extracted. The reported results show how the skeleton (legs and torso) of a walking subject can be identified using four units: the low-resolution skeleton formed by the four units correctly tracks the walking pattern of the two legs, while the upper segment remains centered on the subject's body.
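The abstract does not give implementation details of the self-organizing system. As an illustration of the general idea of self-organizing units tracking observed points, here is a minimal NumPy sketch of a 1-D self-organizing map with four units; all function names, parameters, and the synthetic data are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def som_step(units, point, lr=0.2, sigma=1.0):
    """Move the winning unit (and, more weakly, its chain neighbours)
    toward an observed point -- one update of a 1-D self-organizing map.
    This is a generic SOM rule, not the paper's exact algorithm."""
    d = np.linalg.norm(units - point, axis=1)
    w = np.argmin(d)                     # index of the winning unit
    idx = np.arange(len(units))
    h = np.exp(-((idx - w) ** 2) / (2 * sigma ** 2))  # neighbourhood weights
    return units + lr * h[:, None] * (point - units)

# Four units, loosely analogous to the four skeleton nodes in the paper,
# trained on synthetic "motion" points scattered around a fixed centre.
units = rng.uniform(0.0, 1.0, size=(4, 2))
centre = np.array([5.0, 5.0])
for _ in range(500):
    point = centre + rng.normal(0.0, 0.5, size=2)
    units = som_step(units, point)

print(np.round(units.mean(axis=0), 1))
```

In a real system the training points would come from optical-flow measurements on image frames rather than a synthetic distribution, and the chain topology of the units would let them settle along the elongated silhouette of the subject.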