Recognition of facial expressions is a significant issue in human-robot interaction. The purpose of this paper is to study the alignment and tracking of facial features using optical flow and a component-based Active Appearance Model (AAM), and then to analyze the fitted points to recognize facial expressions. With accurate analysis and tracking of facial features, a robot or computer can recognize a user's facial expressions and emotional variation and respond appropriately. We apply real-time techniques and the AAM to camera input. High-quality AAM alignment depends on an apposite selection of initial positions; however, applying an image pyramid to obtain precise results is time-consuming. In this paper, we introduce a new method of AAM fitting that addresses these problems. In our fitting scheme, we apply partial AAM fitting separately to the mouth and eyes. This makes facial-feature alignment more efficient and enables tracking and real-time alignment on real-world video. To make the partial AAMs more stable, we use multi-level optical flow to determine the initial positions of the facial-feature models. With the proposed algorithm, it is easier to analyze the user's emotional information and obtain accurate positions of facial features for further applications in real-world environments.
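The multi-level (coarse-to-fine) optical-flow initialization mentioned above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it estimates a single global translation between two grayscale frames with pyramidal Lucas-Kanade, whereas the paper tracks per-component facial-feature positions; all function names and the pure-translation motion model are illustrative.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 average pooling (one pyramid reduction step)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def translate(img, dx, dy):
    """Bilinear warp of img by the translation (dx, dy)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = np.clip(xs - dx, 0, w - 1.001)
    ys = np.clip(ys - dy, 0, h - 1.001)
    x0, y0 = xs.astype(int), ys.astype(int)
    fx, fy = xs - x0, ys - y0
    return (img[y0, x0] * (1 - fx) * (1 - fy) +
            img[y0, x0 + 1] * fx * (1 - fy) +
            img[y0 + 1, x0] * (1 - fx) * fy +
            img[y0 + 1, x0 + 1] * fx * fy)

def lk_translation(prev, curr):
    """One Lucas-Kanade step: least-squares global translation from prev to curr."""
    Ix = np.gradient(prev, axis=1)
    Iy = np.gradient(prev, axis=0)
    It = curr - prev
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (dx, dy)

def pyramidal_flow(prev, curr, levels=3, iters=3):
    """Coarse-to-fine translation estimate between two grayscale frames."""
    pyr_prev, pyr_curr = [prev], [curr]
    for _ in range(levels - 1):
        pyr_prev.append(downsample(pyr_prev[-1]))
        pyr_curr.append(downsample(pyr_curr[-1]))
    d = np.zeros(2)  # accumulated (dx, dy)
    for lp, lc in zip(reversed(pyr_prev), reversed(pyr_curr)):
        d *= 2.0  # scale the estimate up to the next finer level
        for _ in range(iters):
            warped = translate(lc, -d[0], -d[1])  # undo current estimate
            d += lk_translation(lp, warped)       # add the residual motion
    return d

# Demo on a synthetic frame pair: a Gaussian blob shifted by (3, -2) pixels.
ys, xs = np.mgrid[0:64, 0:64]
frame0 = np.exp(-((xs - 32.0) ** 2 + (ys - 32.0) ** 2) / 50.0)
frame1 = translate(frame0, 3.0, -2.0)
d = pyramidal_flow(frame0, frame1, levels=3)
```

In an AAM pipeline of the kind the abstract describes, an estimate like `d` would translate the previous frame's converged shape to seed the next frame's partial-model fitting, so the fitter starts near the optimum instead of running a full image-pyramid search.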