
Recognition of facial expressions using component-based Active Appearance Models for human-robot interactions


Authors:
Ren C. Luo; Chun Y. Huang; Chin C. Hsiao
Center for Intelligent Robotics and Automation Research, National Taiwan University, Taiwan

Recognition of facial expressions is becoming a significant issue in human-robot interaction. The purpose of this paper is to study the alignment and tracking of facial features using optical flow and a component-based Active Appearance Model (AAM), and then to analyze the fitted points to recognize facial expressions. With accurate analysis and tracking of facial features, a robot or computer can recognize the user's facial expressions and emotional variation and respond appropriately. We apply real-time techniques and the AAM to camera input. High-quality AAM alignment results depend on an apposite selection of initial positions; however, obtaining precise results with an image pyramid is time-consuming. In this paper, we introduce a new AAM fitting method that addresses these problems. In our fitting scheme, we apply partial AAM fitting separately to the mouth and eyes, which makes facial feature alignment more efficient and enables real-time alignment and tracking on real-world video. To make the partial AAMs more stable, we use multi-level optical flow to determine the initial positions of the facial feature models. The algorithm we developed makes it easier to analyze the user's emotional information and to obtain accurate facial feature positions for further applications in real-world environments.
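The multi-level optical flow used to initialize the feature models can be sketched as a coarse-to-fine (pyramidal) Lucas-Kanade tracker: estimate the displacement of a feature point at the coarsest pyramid level, then double it and refine at each finer level. The NumPy sketch below is a minimal illustration of that idea, not the authors' implementation; the window size, pyramid depth, iteration count, and the Gaussian-blob demo image are all assumptions made for the example.

```python
import numpy as np

def bilinear_window(img, cy, cx, r):
    """Extract a (2r+1)x(2r+1) window centred at the float coordinate
    (cy, cx) using bilinear interpolation."""
    ys = cy + np.arange(-r, r + 1)
    xs = cx + np.arange(-r, r + 1)
    Y, X = np.meshgrid(ys, xs, indexing="ij")
    y0, x0 = np.floor(Y).astype(int), np.floor(X).astype(int)
    fy, fx = Y - y0, X - x0
    return (img[y0, x0] * (1 - fy) * (1 - fx)
            + img[y0 + 1, x0] * fy * (1 - fx)
            + img[y0, x0 + 1] * (1 - fy) * fx
            + img[y0 + 1, x0 + 1] * fy * fx)

def lk_refine(I, J, pt, d, win=9):
    """One Lucas-Kanade step: update the displacement guess d = (dx, dy)
    of the window around pt = (x, y) by solving the normal equations."""
    r = win // 2
    x, y = pt
    Iw = bilinear_window(I, y, x, r)
    Jw = bilinear_window(J, y + d[1], x + d[0], r)
    gy, gx = np.gradient(Iw)                   # spatial gradients of the template
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = (Iw - Jw).ravel()                      # brightness-constancy residual
    e, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares flow update
    return d + e

def pyramidal_lk(I, J, pt, levels=3, iters=10, win=9):
    """Multi-level (coarse-to-fine) optical flow for one feature point:
    solve at the coarsest pyramid level, then double and refine."""
    pyrI, pyrJ = [np.asarray(I, float)], [np.asarray(J, float)]
    for _ in range(levels - 1):
        pyrI.append(pyrI[-1][::2, ::2])        # simple 2x decimation
        pyrJ.append(pyrJ[-1][::2, ::2])
    d = np.zeros(2)
    for lvl in range(levels - 1, -1, -1):
        p = np.asarray(pt, float) / (2 ** lvl)
        for _ in range(iters):
            d = lk_refine(pyrI[lvl], pyrJ[lvl], p, d, win)
        if lvl > 0:
            d *= 2                             # propagate to the finer level
    return d

# Demo: a smooth blob translated by (dx, dy) = (3, 2) pixels.
ys, xs = np.mgrid[0:64, 0:64]
I = np.exp(-((xs - 32.0) ** 2 + (ys - 32.0) ** 2) / 128.0)
J = np.roll(I, shift=(2, 3), axis=(0, 1))
d = pyramidal_lk(I, J, pt=(32, 32))
print(d)  # approximately [3. 2.]
```

Tracking a handful of such points around the mouth and eye regions would then give the initial landmark positions from which a partial AAM fit could start; a production tracker would add bounds checking and an anti-aliased pyramid.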

Published in:

IECON 2011 - 37th Annual Conference of the IEEE Industrial Electronics Society

Date of Conference:

7-10 Nov. 2011