The animation industry faces the challenge of producing facial expression animation that closely resembles an actor's original expressions. One technological aid is facial motion capture. Unfortunately, this technology requires considerable funds and long preparation time, most of which is spent placing markers on the actor's face; the markers are also uncomfortable for the actor. This paper presents the research and development of a software system for the animation industry in the form of marker-free facial motion capture. The main problem addressed in this research is how to acquire facial feature movement data from raw face images. The proposed system comprises two major processes: facial feature extraction and feature point projection. Facial feature extraction uses Viola-Jones for face detection and Active Shape Models (ASM) for landmark extraction; ASM was chosen because it runs significantly faster and locates the points more accurately than comparable models. The recorded facial feature movement data drive a facial expression animation on a 3-dimensional face model. The expected result of this research is a system able to generate facial expression animation on a 3-dimensional face model from real-time image sequences of a real human face without the help of any physical marker. The minimum accuracy obtained in testing is 85.2632%, which occurs in the mouth area under the sad facial expression test scenario. Although the system works well overall, its accuracy in the mouth area remains low. Comparing the accuracy of experiments with different amounts of training data shows that the system achieves higher accuracy when more training data is used.
Therefore, in order to work more accurately, the face detection and facial feature extraction processes require training face data that are more numerous and varied. In future work, the system can be developed with more general training sets to increase its accuracy.
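To illustrate the shape-constraint step at the core of the ASM extraction described above, the following is a minimal NumPy sketch, not the paper's implementation: the observed landmarks are projected onto a PCA shape basis, each mode is limited to a plausible range, and the constrained shape is reconstructed. The function names and the 3-standard-deviation limit are illustrative assumptions.

```python
import numpy as np

def fit_shape_params(x, mean_shape, P, eigenvalues):
    """Project an observed landmark vector onto the ASM shape basis.

    x, mean_shape : (2N,) flattened landmark coordinates (x1, y1, x2, y2, ...)
    P             : (2N, k) matrix whose columns are shape eigenvectors
    eigenvalues   : (k,) variance of each shape mode
    """
    b = P.T @ (x - mean_shape)
    # Constrain each mode to +/- 3 standard deviations so the
    # reconstructed shape stays a plausible face (a common ASM heuristic).
    limit = 3.0 * np.sqrt(eigenvalues)
    return np.clip(b, -limit, limit)

def reconstruct_shape(mean_shape, P, b):
    """Rebuild constrained landmark coordinates from shape parameters."""
    return mean_shape + P @ b
```

In a full ASM search this project-constrain-reconstruct cycle alternates with a local texture search around each landmark until the shape converges.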