Continual Facial Features Transfer for Facial Expression Recognition

Abstract:

Facial Expression Recognition (FER) models based on deep learning mostly rely on a supervised train-once-test-all approach. Such approaches assume that a model trained on an in-the-wild facial expression dataset with one domain distribution will perform well on a test dataset with a domain distribution shift. However, real-world facial images can come from domain distributions that differ from the one on which the model was trained. Re-training a model on only the new domain distribution severely degrades performance on the previous domain, while re-training on all previous and new data can improve overall performance but is computationally expensive. In this study, we move away from the train-once-test-all approach and propose a buffer-based continual learning approach to improve performance across multiple in-the-wild datasets. We propose a model that continually leverages attention to important facial features from the pre-trained model to improve performance on multiple datasets. We validated our model on split in-the-wild datasets, where the data is presented to the model incrementally rather than all at once. Furthermore, to evaluate model performance, we used three in-the-wild datasets representing different domains (Domain-FER) in a continual setting. Extensive experiments on these datasets show that the proposed model achieves better results than other continual FER models.
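
The buffer-based continual learning the abstract refers to is commonly realized as experience replay: a small memory of past samples is mixed into each new batch so earlier domains are not forgotten. The sketch below illustrates that generic idea only; the buffer size, reservoir-style update, model, and dataset loaders are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of buffer-based (replay) continual learning across a
    # sequence of FER datasets. All hyperparameters are illustrative.
    import random
    import torch
    import torch.nn as nn
    import torch.optim as optim

    def train_continually(model, task_loaders, buffer_size=500,
                          replay_batch=32, device="cpu"):
        """Train on datasets arriving one after another, replaying a small
        buffer of past samples to reduce forgetting of earlier domains."""
        buffer = []  # (image, label) pairs kept from previous tasks
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.Adam(model.parameters(), lr=1e-4)

        for loader in task_loaders:              # each loader = one domain
            for cur_images, cur_labels in loader:
                images = cur_images.to(device)
                labels = cur_labels.to(device)

                # Mix current-task samples with replayed samples from the buffer
                if buffer:
                    replay = random.sample(buffer, min(replay_batch, len(buffer)))
                    r_imgs = torch.stack([x for x, _ in replay]).to(device)
                    r_lbls = torch.tensor([y for _, y in replay]).to(device)
                    images = torch.cat([images, r_imgs])
                    labels = torch.cat([labels, r_lbls])

                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()

                # Reservoir-style update of the buffer with current samples only
                for img, lbl in zip(cur_images, cur_labels):
                    if len(buffer) < buffer_size:
                        buffer.append((img, int(lbl)))
                    else:
                        buffer[random.randrange(buffer_size)] = (img, int(lbl))
        return model

In this setting, the split in-the-wild evaluation mentioned in the abstract corresponds to passing the datasets to train_continually one at a time and measuring accuracy on all previously seen domains after each task.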
Published in: IEEE Transactions on Affective Computing (Early Access)
Page(s): 1 - 14
Date of Publication: 15 April 2025
