
Facial Expression Recognition with Active Local Shape Pattern and Learned-Size Block Representations


Abstract:

Facial expression recognition has been studied broadly, and several works using local micro-pattern descriptors have obtained significant results. Open questions remain, however: how to design a discriminative and robust feature descriptor, how to select the most influential expression-related features, and how to represent the face descriptor by exploiting the most salient parts of the face. In this article, we address these three issues to achieve better performance in recognizing facial expressions. First, we propose a new feature descriptor, the Local Shape Pattern (LSP), which describes the local shape structure of a pixel's neighborhood from its prominent directional information by analyzing the statistics of the neighborhood gradient; this makes it robust against subtle local noise and distortion. Second, we propose a selection strategy that learns the codes most active in expression-related changes by selecting those exhibiting statistical dominance and high spatial variance. Lastly, we learn the sizes of the salient facial blocks used to represent the face, on the premise that expression changes vary in both size and location. Combining these three proposals, we conduct person-independent experiments on existing datasets and obtain improved performance on the facial expression recognition task.
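The excerpt does not include the exact LSP encoding, so as a rough illustration of the local micro-pattern family that LSP belongs to, the sketch below computes a classic 3×3 local binary pattern (LBP) and a per-region code histogram. The function names `lbp_code` and `micro_pattern_histogram` are hypothetical; LSP itself differs by deriving its code from neighborhood gradient statistics rather than raw center-neighbor comparisons.

```python
import numpy as np

def lbp_code(patch):
    """Classic 3x3 LBP: threshold the 8 neighbors against the center
    pixel and pack the comparison bits into one byte (0-255)."""
    center = patch[1, 1]
    # Neighbors in clockwise order starting at the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << bit
    return code

def micro_pattern_histogram(img, n_bins=256):
    """Histogram of per-pixel codes over the interior of an image
    region; such histograms serve as the local texture feature."""
    h, w = img.shape
    hist = np.zeros(n_bins, dtype=np.int64)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            hist[lbp_code(img[i - 1:i + 2, j - 1:j + 2])] += 1
    return hist
```

A descriptor like this is sensitive to single-pixel noise flipping individual bits, which is exactly the weakness the gradient-statistics design of LSP is meant to mitigate.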
Published in: IEEE Transactions on Affective Computing ( Volume: 13, Issue: 3, 01 July-Sept. 2022)
Page(s): 1322 - 1336
Date of Publication: 18 May 2020


1 Introduction

In recent years, automatic facial expression recognition (AFER) has had a substantial impact on various areas of human-centric computing, such as emotion analysis, affective computing, and robot control [1]. Since different expressions can be characterized by appearance changes of the face [2], [3], efficient representation of expression-related appearance features is a crucial task in AFER. However, due to differing facial traits and external noise factors, the representation of such features must be simultaneously discriminative and robust, which is challenging in practice. Moreover, describing facial expressions using their most active regions is beneficial, since not all regions of the face are active during expression changes [4], [5], [6], [7], [8]. Nevertheless, selecting these active regions is challenging due to the diverse facial appearances and expressions of different individuals.
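As context for the region-based representation discussed above, the sketch below shows the standard fixed-size block pipeline that such methods typically build on: split a per-pixel code map into blocks and concatenate one histogram per block. This is a generic baseline, not the paper's method; the article's contribution is to *learn* the sizes of the salient blocks instead of fixing `block_size` a priori, and the function name `block_histograms` is hypothetical.

```python
import numpy as np

def block_histograms(code_map, block_size, n_bins=256):
    """Split a per-pixel code map into non-overlapping square blocks
    and concatenate one code histogram per block, yielding a single
    face feature vector of length n_blocks * n_bins."""
    h, w = code_map.shape
    feats = []
    for i in range(0, h - block_size + 1, block_size):
        for j in range(0, w - block_size + 1, block_size):
            block = code_map[i:i + block_size, j:j + block_size]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist)
    return np.concatenate(feats)
```

With a fixed grid, every block contributes equally regardless of how expressive that facial region is; learning block sizes and locations lets the representation allocate finer blocks to active regions such as the eyes and mouth.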

