
Touching the Limits of a Dataset in Video-Based Facial Expression Recognition



Abstract:

In this paper, we examine the issue of video-based facial emotion recognition algorithms that show excellent performance on some benchmarks but achieve much worse accuracy in practical applications. For example, the typical error rate of contemporary deep neural networks on the RAVDESS dataset is less than 5%. We argue that such results are obtained only if the dataset is split incorrectly, so that the same persons appear in both the training and test sets. We claim that it is more honest to use an actor-based split, in which the persons in the training and test sets are disjoint. It is experimentally demonstrated that a near state-of-the-art neural network model pre-trained on the AffectNet dataset achieves 99% accuracy on the conventional split of the RAVDESS dataset. However, when the dataset is split by actor, so that the training and test sets contain only unique persons, the accuracy is 20-30% lower.
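
The following is a minimal sketch of the actor-based (subject-independent) split the abstract argues for, assuming RAVDESS-style file names in which the last hyphen-separated field encodes the actor ID (e.g. "02-01-06-01-02-01-12.mp4" for actor 12). It uses scikit-learn's GroupShuffleSplit as one convenient way to keep actor groups disjoint; the dataset path and variable names are illustrative, not the authors' own code.

from pathlib import Path
from sklearn.model_selection import GroupShuffleSplit

video_files = sorted(Path("RAVDESS").rglob("*.mp4"))       # hypothetical dataset root
actor_ids = [f.stem.split("-")[-1] for f in video_files]   # group label = actor ID
emotions = [f.stem.split("-")[2] for f in video_files]     # third field = emotion code

# GroupShuffleSplit assigns every actor entirely to either the training or the
# test fold, so no person appears in both sets (unlike a random per-clip split).
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(video_files, emotions, groups=actor_ids))

train_files = [video_files[i] for i in train_idx]
test_files = [video_files[i] for i in test_idx]
assert not {actor_ids[i] for i in train_idx} & {actor_ids[i] for i in test_idx}

In contrast, the conventional (subject-dependent) split shuffles individual clips, which lets the model recognize the actors themselves rather than their expressions and inflates the reported accuracy.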
Date of Conference: 05-11 September 2021
Date Added to IEEE Xplore: 17 September 2021
Conference Location: Sochi, Russian Federation

