Abstract:
This paper describes a system and interface that allows deaf and mute users to operate voice-driven virtual assistants through sign language. Most virtual assistants accept only audio input and produce audio output, which makes them inaccessible to people with hearing and speech disabilities. The proposed system makes voice-controlled virtual assistants respond to hand gestures and presents their replies as text output. It draws on deep learning, convolutional neural networks (CNNs), TensorFlow, and Python audio modules. A webcam first captures the hand gestures; a CNN then interprets the captured images and translates them into natural-language phrases. These phrases are mapped to predefined datasets using deep learning, with the neural networks built on the TensorFlow library. The system then generates an audio input for the digital assistant using a Python text-to-speech module. Finally, the assistant's audio output is converted back into text with a Python speech-to-text module and displayed on the viewing screen.
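The abstract does not name the specific Python audio modules used; a minimal sketch of the final bridging step, assuming the pyttsx3 package for text-to-speech and the SpeechRecognition package for speech-to-text, might look like the following (the example command string stands in for the output of the CNN gesture classifier):

import pyttsx3                   # text-to-speech; an assumed choice of "Python text to speech module"
import speech_recognition as sr  # speech-to-text; an assumed choice of "Python speech to text module"

def speak_to_assistant(command_text: str) -> None:
    """Voice the recognised sign-language command so a nearby voice assistant can hear it."""
    engine = pyttsx3.init()
    engine.say(command_text)
    engine.runAndWait()

def transcribe_assistant_reply(timeout_s: float = 5.0) -> str:
    """Listen for the assistant's spoken reply and return it as text for on-screen display."""
    recogniser = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recogniser.listen(source, timeout=timeout_s)
    return recogniser.recognize_google(audio)  # any recogniser backend could be substituted

if __name__ == "__main__":
    # In the full system this string would come from the CNN that classifies the webcam gestures.
    speak_to_assistant("what is the weather")
    print(transcribe_assistant_reply())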
Published in: 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA)
Date of Conference: 15-17 July 2020
Date Added to IEEE Xplore: 01 September 2020