Abstract:
Vivid talking face generation has potential applications in virtual reality. Existing methods can generate talking faces that are synchronized with the audio, but they typically neglect accurate emotional expression. In this paper, we propose a two-step framework to synthesize talking face videos with vivid emotional appearance. The first step generates emotional fine-grained landmarks, including normalized facial landmarks, gaze, and head pose. In the second step, we map the facial landmarks to latent key points, which are then fed into a pre-trained model to generate high-quality face images. Extensive experiments demonstrate the effectiveness of our method.
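To make the described data flow concrete, below is a minimal Python sketch of the two-step pipeline as it is summarized in the abstract. All class and function names (FineGrainedLandmarks, step1_generate_landmarks, step2_render_face) are hypothetical placeholders for illustration only, not the authors' actual models or code; the real system would use trained networks in place of the zero-valued stand-ins.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class FineGrainedLandmarks:
    """Output of step 1, per the abstract: landmarks, gaze, head pose."""
    landmarks: np.ndarray  # normalized facial landmarks, e.g. shape (68, 2)
    gaze: np.ndarray       # gaze direction, e.g. a 3-vector
    head_pose: np.ndarray  # head rotation/translation parameters


def step1_generate_landmarks(audio_features: np.ndarray,
                             emotion_label: int) -> FineGrainedLandmarks:
    """Step 1: predict emotional fine-grained landmarks from audio.

    A real implementation would condition a trained sequence model on the
    audio features and emotion label; zeros here only show the data flow.
    """
    return FineGrainedLandmarks(
        landmarks=np.zeros((68, 2)),
        gaze=np.zeros(3),
        head_pose=np.zeros(6),
    )


def step2_render_face(lm: FineGrainedLandmarks,
                      reference_image: np.ndarray) -> np.ndarray:
    """Step 2: map landmarks to latent key points, then decode a frame.

    The flattening below is a placeholder mapping; in the paper a learned
    network maps landmarks to latent key points, which a pre-trained
    generator consumes together with a reference image.
    """
    latent_keypoints = np.concatenate(
        [lm.landmarks.ravel(), lm.gaze, lm.head_pose])
    # A pre-trained generator would take (latent_keypoints, reference_image)
    # and synthesize the output frame; we return the reference unchanged.
    return reference_image.copy()


# Usage sketch: one frame of the pipeline.
audio_feat = np.zeros(80)                      # placeholder audio features
ref_img = np.zeros((256, 256, 3))              # placeholder reference image
lm = step1_generate_landmarks(audio_feat, emotion_label=1)
frame = step2_render_face(lm, ref_img)
```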
Published in: 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Date of Conference: 16-21 March 2024
Date Added to IEEE Xplore: 29 May 2024