Deep Learning Based Hand Gesture Recognition for Emergency Situation: A Study on Indian Sign Language


Abstract:

Sign language is used to convey feelings and thoughts, as well as to reinforce information given in everyday discussions. The goal of sign language recognition is to recognize and comprehend meaningful human body gestures. Deep learning is a subset of machine learning that has lately gained traction in sign language recognition. The current research focuses on how deep learning may be used to identify hand gestures for emergency situations in a collection of videos. To feed the model, a number of frames were extracted from each video. The model consists of a pre-trained VGG-16 and a recurrent neural network with long short-term memory (RNN-LSTM). It achieved an accuracy of 98% on an Indian Sign Language dataset of hand gestures for emergency situations. Deaf people can use sign language as a form of emergency communication, and the recognition studied here could be utilised to address circumstances such as being in pain, calling for help, or needing to see a doctor.
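
The paper itself includes no code, but the pipeline it describes can be sketched briefly. The following is a minimal, illustrative Keras/TensorFlow sketch (not the authors' implementation) of a frozen pre-trained VGG-16 used as a per-frame feature extractor feeding an LSTM classifier; the frame count, LSTM width, and number of gesture classes are assumptions chosen for illustration, not values reported in the paper.

```python
# Illustrative sketch (not the authors' code): per-frame features from a
# frozen, pre-trained VGG-16 are fed to an LSTM that classifies the gesture.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_FRAMES = 20              # frames sampled per video (assumed)
FRAME_SHAPE = (224, 224, 3)  # VGG-16's standard input size
NUM_CLASSES = 8              # emergency gestures, e.g. pain/help/doctor (assumed)

# Frozen VGG-16 backbone: include_top=False with average pooling yields a
# 512-dimensional feature vector per frame.
backbone = VGG16(weights="imagenet", include_top=False,
                 pooling="avg", input_shape=FRAME_SHAPE)
backbone.trainable = False

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, *FRAME_SHAPE)),
    # Apply the CNN independently to each frame of the sequence.
    layers.TimeDistributed(backbone),
    # The LSTM models the temporal evolution of the gesture across frames.
    layers.LSTM(256),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```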
Date of Conference: 25-26 October 2021
Date Added to IEEE Xplore: 29 December 2021
Conference Location: Sakheer, Bahrain

I. Introduction

Human-Computer Interaction (HCI) is an interdisciplinary field that focuses on the design of computing technologies and the interaction between humans and computers. It has improved continually in recent years and is now utilised extensively in a variety of fields. Sign gesture recognition is one of the most sophisticated areas in which computer vision and artificial intelligence have helped improve communication with physically challenged individuals in emergency circumstances, such as being in pain or calling for help. For deaf populations all around the world, sign language provides a definite way of communicating while preserving their unique grammatical patterns. It relies on conceptually predetermined movements of the hands, arms, head, and body to build a gestural language. Because most sign languages are not mutually intelligible, people who do not share a sign language cannot communicate with one another. American Sign Language (ASL), British Sign Language (BSL), and French Sign Language are three of the most widely used sign languages in the world. The sign language used in India is known as Indian Sign Language (ISL) [1]. According to the 2011 census, India has 2.7 million individuals who are unable to speak and 1.8 million people who are deaf.

References

1. T. Dasgupta, S. Shukla, S. Kumar, S. Diwakar and A. Basu, "A Multilingual Multimedia Indian Sign Language Dictionary Tool", The 6th Workshop on Asian Language Resources, 2008.
2. Y. Fang, K. Wang, J. Cheng and H. Lu, "A Real-Time Hand Gesture Recognition Method", IEEE International Conference on Multimedia and Expo, pp. 2-5, July 2007.
3. M. Oudah, A. Al-Naji and J. Chahl, "Hand Gesture Recognition Based on Computer Vision: A Review of Techniques", J. Imaging, vol. 6, p. 73, 2020.
4. S.K. Arachchi, N.L. Hakim, H.-H. Hsu, S.V. Klimenko and T.K. Shih, "Real-time static and dynamic gesture recognition using mixed space features for 3D virtual world's interactions", Proceedings of the 2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA), 16-18 May 2018.
5. H. Cheng, L. Yang and Z. Liu, "Survey on 3D hand gesture recognition", IEEE Trans. Circuits Syst. Video Technol., vol. 26, pp. 1659-1673, 2015.
6. O.K. Oyedotun and A. Khashman, "Deep learning in vision-based static hand gesture recognition", Neural Comput. Appl., vol. 28, pp. 3941-3951, 2017.
7. P. Molchanov, S. Gupta, K. Kim and J. Kautz, "Hand gesture recognition with 3D convolutional neural networks", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-7, 7-12 June 2015.
8. N. Do, S. Kim, H. Yang and G. Lee, "Robust hand shape features for dynamic hand gesture recognition using multi-level feature LSTM", Appl. Sci., 2020.
9. V. John, A. Boyali, S. Mita, M. Imanishi and N. Sanma, "Deep learning based fast hand gesture recognition using representative frames", International Conference on Digital Image Computing: Techniques and Applications (DICTA), pp. 1-8, 2016.
10. K. Lai and S.N. Yanushkevich, "CNN+RNN depth and skeleton based dynamic hand gesture recognition", 24th International Conference on Pattern Recognition (ICPR), pp. 3451-3456, 2018. Available: https://doi.org/10.1109/ICPR.2018.8545718.
11. P. Molchanov, X. Yang, S. Gupta, K. Kim, S. Tyree and J. Kautz, "Online detection and classification of dynamic hand gestures with recurrent 3D convolutional neural networks", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 4207-4215, Jun. 2016.
12. S. Masood, A. Srivastava, H. Thuwal and M. Ahmad, "Real-time sign language gesture (word) recognition from video sequences using CNN and RNN", Intelligent Engineering Informatics, AISC, vol. 695, pp. 623-632, 2018. Available: https://doi.org/10.1007/978-981-10-7566-7_63.
13. V. Adithya and R. Rajesh, "Hand Gestures for Emergency Situations: A Video Dataset based on Words from Indian Sign Language", Data in Brief, 2020. Available: https://doi.org/10.1016/j.dib.2020.106016.
14. A. Khan, A. Sohail, U. Zahoora and A.S. Qureshi, "A survey of the recent architectures of deep convolutional neural networks", Artificial Intelligence Review, 2020.
15. K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition", arXiv:1409.1556, 2014.
16. S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory", Neural Computation, vol. 9, pp. 1735-1780, 1997.
17. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision", 2016.
18. S. Niu, Y. Liu, J. Wang and H. Song, "A decade survey of transfer learning (2010–2020)", IEEE Transactions on Artificial Intelligence, vol. 1, no. 2, pp. 151-166, 2020.