
Emotional Intensity-based Facial Expression Cloning for Low Polygonal Applications

Seongah Chin (Division of Multimedia, Sungkyul University, Anyang) and Kyoung-Yun Kim

People instinctively recognize facial expressions as a key to nonverbal communication, a finding confirmed by many research projects. A change in the intensity or magnitude of even a single facial expression can lead to different interpretations. A systematic method for synthesizing facial expressions that mimics realistic expressions and intensities is strongly needed in a variety of applications. Although manually produced animation is typically of high quality, the process is slow and costly, and therefore often impractical for low-polygonal applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygonal applications that generates a customized face and clones facial expressions onto it. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values on a customized 3D face and its embedded muscles, expressions for any target face(s) can easily be cloned using the same set of parameters. Through an experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. We also discuss the method's overall usability and how it allows a customized face with embedded facial muscles to be adjusted automatically while mimicking the user's facial configuration, expression, and intensity.
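The parameter-reuse idea in the abstract (a source expression fixed by muscle parameter values, then replayed on any target face) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the muscle names, the per-vertex influence/direction displacement model, the linear scaling by a global intensity value, and the function `clone_expression` are all assumptions made for the example; the paper defines its own intensity mappings.

```python
import numpy as np

# Hypothetical muscle set; the specific names and the displacement
# model below are assumptions for illustration only.
MUSCLES = ["zygomatic_major_l", "zygomatic_major_r", "frontalis", "corrugator"]

def clone_expression(source_params, intensity, target_face):
    """Replay a source expression's muscle parameters on a target face.

    source_params : dict mapping muscle name -> contraction value in [0, 1]
    intensity     : global emotional intensity in [0, 1] (linear scaling is
                    an assumption; the paper's intensity mapping may differ)
    target_face   : dict with 'vertices' (N x 3 array) plus per-muscle
                    'influence' weights (length-N) and 'direction' (N x 3)
    """
    vertices = target_face["vertices"].copy()
    for muscle, contraction in source_params.items():
        w = target_face["influence"][muscle]   # per-vertex muscle weight
        d = target_face["direction"][muscle]   # per-vertex pull direction
        # Displace each vertex along the muscle's pull direction, scaled
        # by its contraction and the global expression intensity.
        vertices += (intensity * contraction) * w[:, None] * d
    return vertices

# Usage: the same parameter set drives any target face.
rng = np.random.default_rng(0)
n = 100  # small low-polygon mesh for the example
target = {
    "vertices": rng.standard_normal((n, 3)),
    "influence": {m: rng.random(n) for m in MUSCLES},
    "direction": {m: rng.standard_normal((n, 3)) for m in MUSCLES},
}
smile = {"zygomatic_major_l": 0.8, "zygomatic_major_r": 0.8,
         "frontalis": 0.2, "corrugator": 0.0}
deformed = clone_expression(smile, intensity=0.6, target_face=target)
```

Because the expression lives entirely in the parameter set rather than in the mesh, cloning to a new target only requires that the target carry its own muscle influence data; no per-target animation work is needed, which is what makes the approach attractive for low-polygonal applications.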

Published in:

IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews (Volume 39, Issue 3)