A sign language is a language that uses visually transmitted sign patterns, instead of acoustically conveyed sound patterns, to convey meaning. Sign languages are typically produced by the simultaneous combination of hand shapes, orientations, and movements of the hands, arms, or body, together with facial expressions, to fluidly express a signer's thoughts. This paper presents a low-cost approach to developing a real-time, computer-vision-based sign language recognition application with motion recognition. We explore new concepts of breaking motion gestures down into sub-components for parallel processing and of mapping motion data into static data representations. This approach can identify sign language gestures without performing computationally intensive processing on each and every captured frame. Moreover, sign language gestures can be evaluated with minimal image processing by mapping the motion to linear/non-linear equations using the functionalities proposed in this paper.
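The abstract does not specify how motion is mapped to linear/non-linear equations. As a minimal sketch of one way such a mapping could work, assuming the tracked input is a sequence of hand-centroid positions from successive frames (a hypothetical feature set, not taken from the paper), a gesture trajectory can be reduced to the coefficients of a fitted polynomial, turning frame-by-frame motion data into a compact static representation:

```python
import numpy as np

def fit_trajectory(points, degree=2):
    """Fit y = f(x) to a tracked hand trajectory.

    points: (x, y) hand-centroid positions from successive frames
            (hypothetical input; the paper's actual features are not given).
    Returns polynomial coefficients, highest degree first, which can
    serve as a static representation of the motion for classification.
    """
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    return np.polyfit(xs, ys, degree)

# Example: centroids sampled along a roughly parabolic sweep (y ~ x^2).
trajectory = [(0, 0.0), (1, 0.9), (2, 3.8), (3, 9.1), (4, 16.2)]
coeffs = fit_trajectory(trajectory, degree=2)
```

A classifier could then compare the fitted coefficients against stored templates instead of reprocessing every frame, which is consistent with the abstract's goal of avoiding computationally intensive per-frame work.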