
Machine learning model for sign language interpretation using webcam images



Abstract:

Human beings interact with each other either through a natural language channel, such as speech or writing, or through body language (gestures), e.g. hand gestures, head gestures, facial expressions, lip motion, and so on. Just as understanding natural language is important, understanding sign language is also very important. Sign language is the basic means of communication among people with hearing disabilities, who face problems communicating with hearing people without a translator. For this reason, a system that recognizes sign language would have a significant, beneficial impact on the social lives of deaf people. In this paper, we propose a marker-free, vision-based Indian Sign Language recognition system that uses image processing, computer vision, and neural network methodologies to identify the characteristics of the hand in images taken from video captured through a web camera. This approach converts video of frequently used everyday sentence gestures into text and then converts the text into audio. Hand shapes are identified from continuous frames using a series of image processing operations. Signs and their corresponding meanings are interpreted using a Haar Cascade Classifier. Finally, the displayed text is converted into speech using a speech synthesizer.
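The per-frame preprocessing step mentioned above (isolating the hand shape from a webcam frame before classification) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the actual system would read frames from a webcam and apply OpenCV operations and a trained Haar Cascade, whereas here a tiny synthetic frame of RGB tuples and an arbitrary threshold value stand in for that pipeline.

```python
# Sketch of two common image-processing operations used to isolate a
# bright hand silhouette from a darker background: grayscale conversion
# followed by binary thresholding. The threshold value of 128 is an
# illustrative assumption, not a parameter from the paper.

def to_grayscale(frame):
    """Luminosity grayscale: 0.299*R + 0.587*G + 0.114*B per pixel."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in frame]

def binarise(gray, threshold=128):
    """Binary mask: 1 where the pixel is brighter than the threshold."""
    return [[1 if px > threshold else 0 for px in row] for px_row in [0] for row in gray]

# A 2x2 synthetic "frame": bright hand pixels in the left column,
# dark background pixels in the right column.
frame = [[(230, 220, 210), (20, 25, 30)],
         [(240, 235, 225), (15, 20, 25)]]

mask = binarise(to_grayscale(frame))
print(mask)  # -> [[1, 0], [1, 0]]
```

In the full system, a mask like this would feed the Haar Cascade Classifier stage, and the recognized sentence text would then be passed to a speech synthesizer.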
Date of Conference: 04-05 April 2014
Date Added to IEEE Xplore: 19 June 2014
Electronic ISBN: 978-1-4799-2494-3
Conference Location: Mumbai, India

