This paper presents an enhanced solution for user-dependent recognition of isolated Arabic Sign Language gestures using disparity images. The sequences of disparity images are used to segment the user's body from a non-stationary background in video-based gestures. The spatio-temporal features in the image sequence are represented in two images by accumulating the prediction errors of consecutive segmented images according to the directionality of motion. Moreover, different accumulation weights are applied to distinguish the directionality of motion in each of the resultant images. The user's body is segmented in the disparity images by using K-means clustering to encapsulate the entity closest to the camera. The prediction errors accumulated in the two images are then transformed into the frequency domain using the Discrete Cosine Transform (DCT). Zonal coding is employed to form feature vectors from the DCT coefficients. Simple classification techniques such as KNN and linear discriminant functions are used to identify the gestures based on the described procedure. The methodology is assessed on 50 repetitions of 23 gestures collected from four different users with a Bumblebee XB3 camera. A classification rate of 96.8% and an improvement of 62% are reported for the described methodology compared with accumulating the prediction errors without segmenting the user's body.
Date of Conference: 20-22 April 2010
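The pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the K-means step uses a simple 1-D Lloyd's iteration on disparity values, the forward/backward accumulation weights are assumed linear ramps (the abstract does not give the actual weights), and the zonal-coding zone size of 8x8 is an arbitrary choice for illustration.

```python
import numpy as np

def segment_user(disparity, k=3):
    # K-means on raw disparity values; the cluster with the highest mean
    # disparity is the entity closest to the camera (the user's body).
    # Simple 1-D Lloyd's iteration, an illustrative stand-in for the
    # paper's K-means segmentation step.
    vals = disparity.ravel().astype(float)
    centers = np.linspace(vals.min(), vals.max(), k)
    for _ in range(20):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vals[labels == j].mean()
    nearest = np.argmax(centers)  # largest disparity = closest to camera
    return (labels == nearest).reshape(disparity.shape)

def accumulated_error_images(frames):
    # Accumulate inter-frame prediction errors into two images with
    # opposite weighting schedules, so each image emphasizes a different
    # directionality of motion. The linear ramp weights are assumptions.
    diffs = [np.abs(frames[t + 1] - frames[t]) for t in range(len(frames) - 1)]
    w_fwd = np.linspace(0.1, 1.0, len(diffs))  # emphasize late motion
    w_bwd = w_fwd[::-1]                        # emphasize early motion
    fwd = sum(w * d for w, d in zip(w_fwd, diffs))
    bwd = sum(w * d for w, d in zip(w_bwd, diffs))
    return fwd, bwd

def dct2(img):
    # Orthonormal 2-D DCT-II built from a 1-D DCT matrix.
    def dct_mat(N):
        k = np.arange(N)[:, None]
        x = np.arange(N)[None, :]
        M = np.cos(np.pi * (2 * x + 1) * k / (2 * N))
        M[0] /= np.sqrt(2)
        return M * np.sqrt(2.0 / N)
    n, m = img.shape
    return dct_mat(n) @ img @ dct_mat(m).T

def zonal_features(coeffs, zone=8):
    # Zonal coding: keep only the low-frequency (top-left) block of DCT
    # coefficients as the feature vector fed to the classifier.
    return coeffs[:zone, :zone].ravel()
```

The resulting feature vectors (one per accumulated-error image) would then be classified with a standard KNN or linear discriminant classifier, as the abstract describes.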