Personalized multi-view face animation with lifelike textures

2 Author(s)
Yanghua Liu and Guangyou Xu (Key Laboratory on Pervasive Computing (Tsinghua University) of the Ministry of Education, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China)

Realistic personalized face animation depends mainly on lifelike appearance and natural head rotation. This paper describes a face model for generating novel-view facial textures with a range of realistic expressions and poses. The model is learned from corpora of a talking person using machine learning techniques. In face modeling, facial texture variation is expressed by a multi-view facial texture space model, while facial shape variation is represented by a compact 3-D point distribution model (PDM). The facial texture space and the shape space are connected by bridging 2-D mesh structures. Levenberg-Marquardt optimization is employed for fine model fitting, and an animation trajectory is trained to produce smooth, continuous image sequences. Test results show that this approach can generate vivid talking-face sequences in various views; moreover, the vector representation significantly reduces animation complexity.
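The shape-modeling and fitting steps mentioned in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a toy point distribution model is built by PCA (via SVD) over synthetic 3-D landmark sets, and the PDM parameters of a target shape are recovered with a hand-rolled Levenberg-Marquardt loop. All data and names here are hypothetical.

```python
import numpy as np

# Toy stand-in for the training corpora: 20 synthetic "shapes",
# each K 3-D landmarks flattened to one vector (all data hypothetical).
rng = np.random.default_rng(0)
K = 5
base = rng.normal(size=3 * K)
modes = rng.normal(size=(2, 3 * K))
train = base + rng.normal(size=(20, 2)) @ modes

# Point distribution model (PDM): mean shape plus principal modes of
# variation, obtained by PCA over the centered training shapes.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
P = vt[:2]                       # top-2 eigen-shapes, rows orthonormal

def shape(b):
    """Shape generated from PDM parameters b."""
    return mean + b @ P

# A target shape known to lie in the model space.
b_true = np.array([1.5, -0.8])
target = shape(b_true)

# Levenberg-Marquardt fit of b: damped Gauss-Newton steps on the residual.
b = np.zeros(2)
lam = 1e-3                       # damping factor
J = P.T                          # Jacobian d shape / d b (model is linear)
for _ in range(50):
    r = target - shape(b)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    if np.linalg.norm(target - shape(b + step)) < np.linalg.norm(r):
        b, lam = b + step, lam * 0.5   # accept step, reduce damping
    else:
        lam *= 10.0                    # reject step, increase damping

print(np.round(b, 3))            # recovered parameters, close to b_true
```

Because the toy model is linear, the Jacobian is constant and the fit converges quickly; in the paper's setting the texture and shape models make the residual nonlinear, which is where Levenberg-Marquardt's adaptive damping pays off.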

Published in:

Tsinghua Science and Technology (Volume 12, Issue 1)