In this paper we propose an MPEG-4-compliant approach to synthesizing and controlling facial expressions on 3D facial models. This is achieved by establishing conformity between the MPEG-4 facial animation standard and quadratic deformation model representations of facial expressions. This conformity allows us to use the quadratic deformation tables as a higher layer on top of the MPEG-4 facial animation parameters (FAPs) to compute FAP values. The FAP values for an expression E are computed through a linear mapping between a set of MPEG-4 FAP points transformed by the quadratic deformation models and the semantics of the 3D facial model. The nature of the quadratic deformation model representations allows the six basic expressions (smile, sadness, fear, surprise, anger, and disgust) to be synthesized and controlled. Drawing on Whissell's psychological studies of emotion, we compute an interpolation parameter that is used to synthesize intermediate facial expressions. The paper presents results of experimental studies performed with the Greta embodied conversational agent. The achieved results are promising and can lead to future research in synthesizing a wider range of facial expressions.
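The pipeline sketched in the abstract can be illustrated as follows: feature points are transformed by a quadratic (second-order polynomial) deformation, FAP values are taken as displacements from the neutral face, and an interpolation parameter scales those FAPs to produce intermediate expressions. This is a minimal sketch under stated assumptions; the function names, coefficient values, and the per-coordinate quadratic form are illustrative, not the paper's actual deformation tables or Greta's implementation.

```python
import numpy as np

def quadratic_deform(points, cx, cy):
    """Apply an assumed quadratic deformation model to 2D feature points.

    Each output coordinate is a quadratic polynomial in (x, y):
        x' = cx . [1, x, y, x^2, x*y, y^2]   (and likewise for y' with cy)
    cx, cy are 6-element coefficient vectors standing in for one row of a
    deformation table (illustrative values only).
    """
    x, y = points[:, 0], points[:, 1]
    basis = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=1)
    return np.stack([basis @ cx, basis @ cy], axis=1)

def faps_from_displacement(neutral, deformed):
    """FAP values modeled as feature-point displacements from the neutral face."""
    return deformed - neutral

def interpolate_faps(fap_expr, alpha):
    """Intermediate expression: scale expression FAPs by alpha in [0, 1]."""
    return alpha * fap_expr

# Toy example: three feature points, a mild quadratic stretch in x and y.
neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cx = np.array([0.0, 1.0, 0.0, 0.1, 0.0, 0.0])  # identity in x plus 0.1*x^2
cy = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.1])  # identity in y plus 0.1*y^2
deformed = quadratic_deform(neutral, cx, cy)
faps = faps_from_displacement(neutral, deformed)
half_expression = interpolate_faps(faps, 0.5)
```

In this sketch, setting alpha between 0 (neutral) and 1 (full expression) yields the intermediate expressions; the paper derives alpha from Whissell's emotion studies rather than choosing it by hand.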