Abstract:
Facial Expression Recognition (FER) models based on deep learning mostly rely on a supervised train-once-test-all approach, which assumes that a model trained on an in-the-wild facial expression dataset with one domain distribution will perform well on a test dataset exhibiting a domain shift. In practice, however, real-world facial images may come from distributions different from the one the model was trained on. Re-training the model on only the new domain severely degrades performance on previous domains, while re-training on all previous and new data can improve overall performance but is computationally expensive. In this study, we move away from the train-once-test-all approach and propose a buffer-based continual learning approach that improves performance across multiple in-the-wild datasets. The proposed model continually leverages attention to important facial features from the pre-trained model to improve performance on multiple datasets. We validated our model on split in-the-wild datasets, where the data is presented to the model incrementally rather than all at once. Furthermore, to evaluate model performance, we continually used three in-the-wild datasets representing different domains (Domain-FER). Extensive experiments on these datasets show that the proposed model outperforms other continual FER models.
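The buffer-based continual learning setup described above can be illustrated with a minimal sketch. The class and function names below (`ReplayBuffer`, `train_incrementally`, `train_step`) are hypothetical and not taken from the paper; the sketch only shows the generic rehearsal pattern the abstract refers to, in which a fixed-size memory of past-domain samples is replayed alongside each new batch to mitigate catastrophic forgetting:

```python
import random


class ReplayBuffer:
    """Fixed-size memory of (image, label) pairs from previously seen domains.

    Uses reservoir sampling so the buffer holds an approximately uniform
    subset of every sample observed so far. This is one common choice for
    buffer-based continual learning; the paper's actual buffer policy may differ.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total number of samples ever offered to the buffer

    def add(self, sample):
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Replace a random slot with probability capacity / seen.
            idx = random.randint(0, self.seen)
            if idx < self.capacity:
                self.data[idx] = sample
        self.seen += 1

    def sample(self, k):
        # Draw up to k stored samples for rehearsal.
        return random.sample(self.data, min(k, len(self.data)))


def train_incrementally(domains, buffer, train_step):
    """Present each domain (e.g. a different in-the-wild FER dataset) in sequence.

    `domains` is an iterable of domains, each an iterable of batches;
    `train_step` is a hypothetical callback that runs one optimizer step
    on a mixed batch of current and replayed samples.
    """
    for domain in domains:
        for batch in domain:
            replay = buffer.sample(len(batch))
            train_step(batch + replay)  # mix new-domain and past-domain samples
            for sample in batch:
                buffer.add(sample)
```

Because replayed past-domain samples are interleaved with every new-domain batch, the model is never optimized on the new distribution alone, which is what makes sequential training cheaper than full joint re-training while retaining performance on earlier domains.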
Published in: IEEE Transactions on Affective Computing ( Early Access )
- IEEE Keywords
- Index Terms
- Facial Expressions
- Face Recognition
- Facial Features
- Facial Expression Recognition
- Domain Shift
- Face Images
- Incremental Learning
- Training Data
- Feature Maps
- Stochastic Gradient Descent
- Data Streams
- Domain Adaptation
- Source Domain
- Active Memory
- Attention Map
- Human-robot Interaction
- Previous Tasks
- Buffer Size
- Pre-trained Weights
- Loss Of Affinity
- Catastrophic Forgetting
- Affective Computing
- Vision Transformer
- Noisy Labels
- Artificial Intelligence Training
- Joint Training
- Transfer Learning
- Joint Model
- Test Split