Abstract:
The inability to speak is a profound disability: a person with a speech impairment cannot communicate with others through speech and hearing. To overcome this, such individuals use sign language as their form of communication. Although signing has become more commonplace in recent years, communication between signers and non-signers remains difficult. Over time, the flow of information and emotion in a person's life has come to depend ever more on communication, and for a person with these special needs, sign language, built from entirely distinct hand gestures, is the only way to communicate with the rest of the world. With recent advances in computer vision and deep learning techniques, the fields of motion and gesture recognition have improved significantly. Sign language recognition is a well-researched subject for American Sign Language (ASL); in contrast, comparatively little published work exists on Indian Sign Language (ISL). The proposed method recognises static hand signs from 4972 images covering twenty-four English letters (A, B, C, D, E, F, G, H, I, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y). The main goal of our work is a deep-learning-based application that translates sign language into text and, using the "Google text to speech" API, into speech, facilitating communication between signers and non-signers. We used a dataset available on Kaggle. The proposed method employs a custom Convolutional Neural Network and achieves an accuracy of 99%.
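A minimal sketch of the described pipeline appears below, assuming a Keras-style custom CNN and the gTTS package for the "Google text to speech" step. The input size (64×64 grayscale), layer widths, dropout rate, and output file name are illustrative assumptions and not the authors' exact architecture; only the 24-letter label set comes from the abstract.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from gtts import gTTS  # "Google text to speech" client

# Twenty-four static letters; J and Z are absent from the label set.
CLASSES = list("ABCDEFGHIKLMNOPQRSTUVWXY")

def build_model(input_shape=(64, 64, 1), num_classes=len(CLASSES)):
    # A small custom CNN: two conv/pool blocks, then a dense classifier.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def speak_prediction(model, image):
    # image: a (64, 64, 1) grayscale array scaled to [0, 1].
    probs = model.predict(image[np.newaxis, ...], verbose=0)[0]
    letter = CLASSES[int(np.argmax(probs))]
    # Convert the recognised text to speech and save it as audio.
    gTTS(text=letter, lang="en").save("prediction.mp3")
    return letter
```

In use, `build_model()` would be trained on the labelled sign images (e.g. the Kaggle dataset mentioned above) before `speak_prediction` is called on a single preprocessed frame.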
Date of Conference: 16-17 October 2022
Date Added to IEEE Xplore: 13 December 2022