This paper proposes a novel framework for universal facial expression recognition. The framework is based on two sets of features extracted from the face image: entropy and brightness. First, saliency maps are obtained using a state-of-the-art saliency detection algorithm, namely "frequency-tuned salient region detection". Then only the localized salient facial regions of the saliency maps are processed to extract entropy and brightness features. To validate the performance of the saliency detection algorithm against the human visual system, we performed a visual experiment. Eye movements of 15 subjects were recorded with an eye-tracker under free-viewing conditions as they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results of the visual experiment provide evidence that the obtained saliency maps conform well with the human fixation data. Finally, the performance of the proposed framework is demonstrated through satisfactory classification results on the Cohn-Kanade database.
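The pipeline described above (frequency-tuned saliency, then entropy and brightness over the salient region) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the original frequency-tuned method operates in CIELab color space, whereas this sketch works directly on an RGB/grayscale array for simplicity, and the threshold used to localize the salient region is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_tuned_saliency(img):
    """Frequency-tuned saliency sketch: per-pixel distance between the
    slightly blurred image and the global mean feature vector.
    img: float array (H, W, C) with values in [0, 1]."""
    blurred = gaussian_filter(img, sigma=(1, 1, 0))          # blur spatial axes only
    mean_vec = img.reshape(-1, img.shape[2]).mean(axis=0)    # global mean color
    return np.linalg.norm(blurred - mean_vec, axis=2)        # saliency map (H, W)

def region_features(gray, mask, bins=256):
    """Shannon entropy and mean brightness of the pixels selected by mask."""
    vals = gray[mask]
    hist, _ = np.histogram(vals, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()   # in bits
    brightness = vals.mean()
    return entropy, brightness

# Example: localize the salient region by thresholding the saliency map
# (thresholding at the map's mean is a simplifying assumption here).
img = np.random.rand(64, 64, 3)
sal = frequency_tuned_saliency(img)
mask = sal > sal.mean()
gray = img.mean(axis=2)
entropy, brightness = region_features(gray, mask)
```

The entropy/brightness pair computed per salient region would then serve as the feature vector fed to a classifier, as the framework describes.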