Abstract:
This paper presents a solution capable of recognizing the facial expressions performed by a person and mapping them onto a 3D virtual face model using the depth and RGB data captured by Microsoft's Kinect sensor. The solution starts by detecting the face and segmenting its regions, then identifies the current expression using EigenFaces metrics on the RGB images and reconstructs the face from the filtered depth data. A new dataset of 20 human subjects is introduced for learning purposes; it contains the images and point clouds for the different facial expressions performed. The algorithm automatically detects and displays the seven standard expressions: surprise, fear, disgust, anger, joy, sadness, and the neutral appearance. As a result, our system shows a morphing sequence between the sets of 3D face avatar models.
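The expression-recognition step the abstract describes rests on the classic EigenFaces technique: project face images into a PCA subspace and compare their projection weights. A minimal sketch of that idea follows, using synthetic data, a placeholder image size, and a simple nearest-neighbor match — this is an illustration of EigenFaces in general, not the authors' implementation:

```python
import numpy as np

# EigenFaces sketch: project flattened face images into a PCA subspace
# and classify a probe by nearest neighbor among projected training faces.
# Image size, labels, and data here are synthetic placeholders.

rng = np.random.default_rng(0)
H, W = 16, 16                      # tiny placeholder image size
labels = ["surprise", "fear", "disgust", "anger", "joy", "sadness", "neutral"]

# Synthetic training set: one flattened image per expression
train = rng.normal(size=(len(labels), H * W))

# 1. Center the data on the mean face
mean_face = train.mean(axis=0)
centered = train - mean_face

# 2. Eigenfaces = principal components of the centered training set
#    (right singular vectors of the data matrix)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 5                              # number of eigenfaces kept
eigenfaces = vt[:k]

# 3. Project the training faces into the eigenface subspace
train_weights = centered @ eigenfaces.T

def classify(image):
    """Return the label whose projected weights are nearest to the probe's."""
    w = (image - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(train_weights - w, axis=1)
    return labels[int(np.argmin(dists))]

# A probe that is a slightly perturbed copy of the "joy" sample
probe = train[labels.index("joy")] + rng.normal(scale=0.01, size=H * W)
print(classify(probe))
```

In a real pipeline the training matrix would hold many flattened RGB face crops per expression, and a threshold on the nearest distance would reject non-face or unknown inputs.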
Date of Conference: 18-21 March 2013
Date Added to IEEE Xplore: 22 July 2013