Abstract:
This letter introduces a novel multi-modal environmental translator for real-time emotion recognition. The system integrates facial expression recognition (FER) and speech emotion recognition (SER) to analyze visual and vocal cues, and conveys emotional feedback through vibrotactile signals. Emotions are mapped to distinct vibration frequencies, ranging from 0.4 Hz for neutral to 35 Hz for anger, enabling users to intuitively identify seven core emotions through tactile sensation. A user study involving ten participants demonstrated an average adaptation time of less than 7 min, indicating that the system quickly familiarizes users with the vibration signals. Overall, this solution provides a robust approach to enhancing real-time emotion recognition through haptic feedback, making it suitable for everyday social interactions.
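The emotion-to-frequency mapping described in the abstract can be pictured as a simple lookup table that translates a recognized emotion label into a haptic drive frequency. The sketch below is illustrative only: the abstract specifies just the endpoints of the range (0.4 Hz for neutral, 35 Hz for anger), so the other emotion labels and their frequencies are placeholder assumptions, not values reported in the letter.

    # Illustrative emotion-to-vibration-frequency lookup (Python).
    # Only the neutral (0.4 Hz) and anger (35 Hz) values come from the abstract;
    # the remaining emotion labels and frequencies are hypothetical placeholders.

    EMOTION_TO_FREQ_HZ = {
        "neutral":  0.4,   # from the abstract
        "happy":    5.0,   # placeholder
        "sad":      10.0,  # placeholder
        "surprise": 15.0,  # placeholder
        "fear":     20.0,  # placeholder
        "disgust":  27.0,  # placeholder
        "anger":    35.0,  # from the abstract
    }

    def vibration_frequency(emotion: str) -> float:
        """Return the vibrotactile frequency (Hz) for a recognized emotion label."""
        try:
            return EMOTION_TO_FREQ_HZ[emotion]
        except KeyError:
            raise ValueError(f"Unknown emotion label: {emotion!r}")

    if __name__ == "__main__":
        # An FER/SER fusion stage would output an emotion label,
        # which is then converted into a haptic drive frequency.
        print(vibration_frequency("anger"))  # 35.0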
Published in: IEEE Sensors Letters (Volume: 9, Issue: 3, March 2025)