Abstract:
Learning aids for hearing and speech disabled people exist, but their usage is limited. The proposed system would be a real-time system in which live sign gestures are processed using image processing. Classifiers would then be used to differentiate the various signs, and the translated output would be displayed as text. Machine learning algorithms will be used to train on the data set. The purpose of the system is to improve on existing systems in this area in terms of response time and accuracy through efficient algorithms, high-quality data sets and better sensors. Existing systems recognize gestures only with high latency because they rely on image processing alone. In our project we aim to develop a cognitive system that is responsive and robust enough to be used in day-to-day applications by hearing and speech disabled people.
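The abstract describes a pipeline of image processing on live gesture frames followed by a classifier that maps each gesture to text. The sketch below illustrates one way such a pipeline could look, assuming OpenCV and scikit-learn; the HOG features, linear SVM, gesture labels, camera index and data-set file names are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a sign-to-text pipeline, assuming OpenCV and scikit-learn.
# HOG + linear SVM are illustrative choices; labels and file names are hypothetical.
import cv2
import numpy as np
from sklearn.svm import SVC

# HOG descriptor: window 64x64, block 16x16, stride 8x8, cell 8x8, 9 bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def extract_features(frame):
    """Preprocess a frame and compute a HOG feature vector."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))
    return hog.compute(gray).flatten()

# Train a classifier offline on a prepared gesture data set (hypothetical files).
X_train = np.load("gesture_features.npy")
y_train = np.load("gesture_labels.npy")
clf = SVC(kernel="linear").fit(X_train, y_train)

labels = {0: "hello", 1: "thank you", 2: "yes"}  # hypothetical gesture classes

# Real-time loop: classify each webcam frame and overlay the translated text.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pred = clf.predict([extract_features(frame)])[0]
    cv2.putText(frame, labels.get(pred, "?"), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Sign to text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()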
Published in: 2017 International Conference on Computation of Power, Energy Information and Communication (ICCPEIC)
Date of Conference: 22-23 March 2017
Date Added to IEEE Xplore: 15 February 2018