Active tracking and cloning of facial expressions using spatio-temporal information

3 Author(s)

Lijun Yin (Dept. of Computer Science, State University of New York, Binghamton, NY, USA); A. Basu; M. T. Yourst

This paper presents a new method for analyzing and synthesizing facial expressions, in which a spatio-temporal gradient-based method (i.e., optical flow) is exploited to estimate the movement of facial feature points. We propose a method, called motion correlation, that improves on the conventional block correlation method for obtaining motion vectors. The tracking of facial expressions under an active camera is also addressed. With the estimated motion vectors, a facial expression can be cloned by adjusting an existing 3D facial model or synthesized using different facial models. The experimental results demonstrate that the proposed approach is feasible for applications such as low bit rate video coding and face animation.
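For context, the conventional block correlation baseline that the paper's motion correlation method improves upon can be sketched roughly as below. This is an illustrative sketch only, not the authors' code: the function name, block size, search range, and the use of normalized cross-correlation as the matching score are assumptions.

```python
import numpy as np

def block_motion_vector(prev_frame, next_frame, point, block=8, search=6):
    """Estimate the motion vector of one feature point by block correlation:
    the block around `point` in the previous frame is compared against
    candidate blocks in a search window of the next frame, and the
    best-matching displacement is returned. Frames are 2-D grayscale
    arrays; `point` is (row, col)."""
    r, c = point
    h = block // 2
    ref = prev_frame[r - h:r + h, c - h:c + h].astype(np.float64)
    best_score, best_dv = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = next_frame[r + dr - h:r + dr + h,
                              c + dc - h:c + dc + h].astype(np.float64)
            if cand.shape != ref.shape:
                continue  # candidate block falls partly outside the frame
            # Normalized cross-correlation as the matching score.
            a = ref - ref.mean()
            b = cand - cand.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
            score = (a * b).sum() / denom
            if score > best_score:
                best_score, best_dv = score, (dr, dc)
    return best_dv  # (d_row, d_col) displacement of the feature point
```

Applying such a matcher to each tracked facial feature point yields the per-point motion vectors that drive the cloning or synthesis step described in the abstract; the paper's motion correlation variant refines how these matches are scored.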

Published in:

Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002)

Date of Conference:

2002