This paper presents an automatic approach to segmenting 3-D hand trajectories and transcribing phonemes from them, as a step towards recognizing American Sign Language (ASL). We first apply a segmentation algorithm that detects minima of velocity and maxima of directional-angle change to segment the hand motion trajectories of naturally signed sentences. This yields over-segmented trajectories, which are further processed by a trained naive Bayesian detector that identifies true segmentation points and eliminates false alarms. On unseen ASL sentence samples, the segmentation algorithm detected 88.5% of the true segmentation points with an 11.8% false-alarm rate. These segmentation results were refined by a simple majority-voting scheme, and the final segments were used to transcribe ASL phonemes by clustering PCA-based features extracted from the training sentences. We then trained hidden Markov models (HMMs) to recognize the phoneme sequences in the sentences. On the 25 test sentences containing 157 segments, the average number of errors was 15.6.
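The candidate-detection step described above (minimal velocity and maximal directional-angle change) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the thresholds `v_thresh` and `angle_thresh` and the function name are assumptions for demonstration only.

```python
import numpy as np

def candidate_segment_points(traj, v_thresh=0.1, angle_thresh=np.pi / 3):
    """Return candidate segmentation indices of a 3-D hand trajectory
    (an N x 3 array of positions), using the two cues named in the
    abstract: local velocity minima and large directional-angle change.
    Threshold values are illustrative, not taken from the paper."""
    v = np.diff(traj, axis=0)              # frame-to-frame displacement vectors
    speed = np.linalg.norm(v, axis=1)      # velocity magnitudes
    candidates = set()
    for i in range(1, len(speed) - 1):
        # cue 1: a local speed minimum below the threshold
        if speed[i] < speed[i - 1] and speed[i] < speed[i + 1] and speed[i] < v_thresh:
            candidates.add(i + 1)
        # cue 2: a large change in motion direction between consecutive steps
        cos = np.dot(v[i - 1], v[i]) / (speed[i - 1] * speed[i] + 1e-12)
        if np.arccos(np.clip(cos, -1.0, 1.0)) > angle_thresh:
            candidates.add(i)
    return sorted(candidates)
```

Because both cues fire liberally, such a detector over-segments the trajectory, which is exactly why the abstract's second stage (the trained naive Bayesian detector) is needed to separate true segmentation points from false alarms.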