Facial expression control of 3-dimensional face model using facial feature extraction

2 Author(s)
Sumarsono, A.R. and Suwardi, I.S. (Informatics Department, Bandung Institute of Technology, Bandung, Indonesia)

The animation industry faces the challenge of producing facial expression animations that closely resemble the original facial expressions. One supporting technology is facial motion capture. Unfortunately, this technology requires considerable funding and a long preparation time, most of which is spent placing markers on the actor's face; the markers are also uncomfortable for the actor. This paper presents the research and development of a software system for the animation industry in the form of marker-free facial motion capture. The main problem addressed is how to acquire facial feature movement data from raw face images. The system consists of two major processes: facial feature extraction and feature point projection. The facial feature extraction process uses Viola-Jones for face detection and Active Shape Models (ASM) for the extraction; ASM was chosen because it runs significantly faster and locates the feature points more accurately than the other model considered. The recorded facial feature movement data then drive the facial expression animation of a 3-dimensional face model. The expected result of this research is a system able to generate facial expression animation on a 3-dimensional face model from real-time image sequences of a real human face without the help of any physical markers. The minimum accuracy obtained in testing is 85.2632%, occurring in the mouth area for the sad facial expression test scenario. Although the system works well overall, its accuracy in the mouth area remains low. Comparing experiments with different amounts of training data shows that the system achieves higher accuracy when more training data are used. Therefore, to work more accurately, the face detection and facial feature extraction processes require a larger and more varied set of training face data. In the future, the system can be developed with more general training sets to increase the accuracy.
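
For illustration only, the sketch below shows how the face-detection front end of such a marker-free pipeline could be set up with OpenCV's Viola-Jones cascade classifier on a real-time camera feed. This is not the authors' implementation: the fit_asm placeholder stands in for the Active Shape Model fitting step, since the paper does not name a specific ASM library, and the projection of the feature points onto the 3-dimensional face model is omitted.

import cv2

# Viola-Jones face detector using the Haar cascade bundled with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def fit_asm(gray_face):
    # Hypothetical placeholder for the ASM fitting step described in the
    # paper; an actual implementation would return facial feature points.
    raise NotImplementedError("plug in an Active Shape Model fitter here")

cap = cv2.VideoCapture(0)  # real-time image sequence from a camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw the detected face region; the ASM step would run on this crop.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # feature_points = fit_asm(gray[y:y + h, x:x + w])
        # ...project feature_points onto the 3-dimensional face model here...
    cv2.imshow("marker-free capture (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()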

Published in:

2011 International Conference on Electrical Engineering and Informatics (ICEEI)

Date of Conference:

17-19 July 2011
