Expressions carry vital information in sign language. In this study, we implement a multi-resolution active shape model (MR-ASM) tracker that tracks 116 facial landmarks in videos. Since the expressions involve a significant amount of head rotation, we employ multiple ASM models to handle different poses. The tracked landmark points are used to extract motion features, which are fed to a support vector machine (SVM) based classifier. We obtained over 90% classification accuracy on a data set containing 7 expressions.
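The classification stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature definition (flattened frame-to-frame landmark displacements), the clip length, the linear kernel, and the synthetic data are all assumptions; only the landmark count (116) and the number of expression classes (7) come from the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

N_LANDMARKS = 116   # facial landmarks per frame (from the abstract)
N_FRAMES = 10       # frames per clip (assumed)
N_CLASSES = 7       # expression classes (from the abstract)

def motion_features(track):
    """Flatten frame-to-frame landmark displacements into one feature vector.

    track: array of shape (N_FRAMES, N_LANDMARKS, 2) with (x, y) positions.
    Returns a vector of length (N_FRAMES - 1) * N_LANDMARKS * 2.
    """
    return np.diff(track, axis=0).reshape(-1)

# Synthetic stand-in for tracked clips: each expression class moves the
# landmarks along a class-specific direction, plus tracking noise.
X, y = [], []
for label in range(N_CLASSES):
    direction = rng.normal(size=2)
    for _ in range(30):
        base = rng.normal(size=(N_LANDMARKS, 2))
        drift = np.arange(N_FRAMES)[:, None, None] * direction
        noise = rng.normal(scale=0.1, size=(N_FRAMES, N_LANDMARKS, 2))
        track = base[None] + drift + noise
        X.append(motion_features(track))
        y.append(label)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)
clf = SVC(kernel="linear").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

On this easily separable synthetic data a linear SVM reaches near-perfect accuracy; real tracked landmarks would of course require normalization for head pose and scale, which is what the multiple pose-specific ASM models address.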