Abstract:
Sign language serves as a primary mode of communication for hearing-impaired individuals, who communicate through hand movements, body gestures, and facial expressions. The complexity of sign language encompasses various hand and finger articulations, often coordinated with head, face, and body movements. Despite nearly three decades of research, automatic sign language recognition remains an evolving field that presents considerable challenges. Existing wearable, audio-based, and vision-based systems suffer from limitations including privacy concerns, sensitivity to lighting conditions, noisy environments, and maintenance overhead. To address these issues, we propose a contactless radar-based system that recognizes and translates expressions by analyzing head movements, hand movements, and facial expressions. To the best of our knowledge, this is the first contactless system that recognizes head movements, hand movements, and facial expressions simultaneously using radar and Deep Learning (DL) models. This study addresses the challenge of expression recognition by leveraging micro-Doppler signatures acquired through radar sensor technology. The proposed approach extracts two-dimensional spatiotemporal features from radar data and employs state-of-the-art DL architectures to classify 16 expressions: Ashamed, Cheerful, Enormal, Furious, GoodIdea, Guilty, Lonely, Normal, Ok, Playful, Proud, Sad, Shocked, Surprised, Thinking, and Worried. We collected a diverse dataset of 1,440 samples from human subjects aged 20 to 40. After preprocessing, four pre-trained DL models (GoogleNet, SqueezeNet, VGG16, and VGG19) were applied to this dataset to classify the expressions. Notably, VGG16 outperformed the other models, achieving an accuracy of 94.2%.
Published in: IEEE Sensors Journal (Early Access)
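The abstract describes fine-tuning pre-trained CNNs on two-dimensional spatiotemporal (micro-Doppler) representations of the radar data. The snippet below is a minimal illustrative sketch of that general transfer-learning setup, not the authors' implementation: it assumes the micro-Doppler signatures have already been rendered as RGB spectrogram images stored in class-named folders (the path "spectrograms/train", the batch size, learning rate, and epoch count are hypothetical), and it adapts torchvision's ImageNet-pretrained VGG16 to the 16 expression classes.

```python
# Hypothetical sketch: fine-tuning a pretrained VGG16 on radar micro-Doppler
# spectrogram images for 16 expression classes. Illustrative only; the paths
# and hyperparameters are assumptions, not the authors' settings.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

NUM_CLASSES = 16  # Ashamed, Cheerful, ..., Worried

# Resize and normalize spectrogram images to the 224x224 input VGG16 expects
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "spectrograms/train" is a placeholder path with one subfolder per class
train_set = datasets.ImageFolder("spectrograms/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Load ImageNet-pretrained VGG16 and replace the final layer for 16 classes
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Short fine-tuning loop (epoch count is illustrative)
for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same recipe applies to the other reported backbones (GoogleNet, SqueezeNet, VGG19) by swapping the model constructor and its final classification layer.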