Abstract:
Because facial expression recognition (FER) is essential to human-computer interaction, the security requirements of humanizing artificial intelligence demand careful consideration. According to past studies, adding relatively modest perturbations to the input vector makes it simple to change the output of deep learning (DL)-based models. However, research on adversarial examples that target FER systems is still in its infancy. In this study, we therefore analyze a black-box attack under a highly constrained condition in which only one pixel may be changed. To this end, we propose a novel technique based on particle swarm optimization (PSO) that generates an adversarial perturbation at the level of a single pixel within a superpixel of a face image to fool three popular DL-based FER systems, namely FER-net, ResNet50, and VGG16. All experiments are performed on three publicly available benchmark datasets, namely the Japanese Female Facial Expression (JAFFE) dataset, Extended Cohn-Kanade (CK+), and FED-RO, under two attack scenarios: untargeted and targeted. On the one hand, the success rates of the proposed method on JAFFE, CK+, and FED-RO are 62.5%, 26.12%, and 42.5%, respectively, when fooling FER-net in the untargeted attack scenario. On the other hand, the success rates on JAFFE, CK+, and FED-RO are 77.14%, 50%, and 48.71%, respectively, when fooling VGG16 in the untargeted attack scenario. The findings demonstrate that the proposed method outperforms a well-known pioneering approach based on differential evolution.
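The abstract does not spell out the optimization details, but a PSO-based one-pixel attack of this kind can be sketched as follows. Each particle encodes a candidate perturbation (row, column, R, G, B), and the swarm minimizes the model's confidence in the true class (the untargeted case). The fitness function, the predict_proba interface, the search bounds, and all PSO hyperparameters below are illustrative assumptions, not the paper's actual settings; in particular, the paper's superpixel constraint could be imposed by tightening the positional bounds to one superpixel's bounding box.

    # Minimal sketch of a PSO-driven one-pixel attack (untargeted).
    # `predict_proba`, the bounds, and the PSO coefficients are assumptions.
    import numpy as np

    def one_pixel_pso_attack(image, true_label, predict_proba,
                             n_particles=20, n_iters=50,
                             w=0.7, c1=1.5, c2=1.5, seed=None):
        """Search for one pixel (row, col, R, G, B) that lowers the
        model's confidence in `true_label`."""
        rng = np.random.default_rng(seed)
        h, w_img, _ = image.shape
        # Search bounds: pixel coordinates and 8-bit RGB intensities.
        # To respect a superpixel constraint, shrink the first two bounds
        # to that superpixel's bounding box instead of the whole image.
        lo = np.zeros(5)
        hi = np.array([h - 1, w_img - 1, 255, 255, 255], dtype=float)

        def fitness(p):
            # Apply the candidate pixel, return true-class confidence.
            adv = image.copy()
            adv[int(p[0]), int(p[1])] = p[2:5]
            return predict_proba(adv)[true_label]

        # Initialize particle positions, velocities, and personal bests.
        pos = rng.uniform(lo, hi, size=(n_particles, 5))
        vel = rng.uniform(-1, 1, size=(n_particles, 5)) * (hi - lo) * 0.1
        pbest = pos.copy()
        pbest_fit = np.array([fitness(p) for p in pos])
        g = pbest[np.argmin(pbest_fit)].copy()   # global best position
        g_fit = pbest_fit.min()

        for _ in range(n_iters):
            # Standard PSO update: inertia + cognitive + social terms.
            r1, r2 = rng.random((2, n_particles, 1))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            fits = np.array([fitness(p) for p in pos])
            improved = fits < pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
            if fits.min() < g_fit:
                g, g_fit = pos[np.argmin(fits)].copy(), fits.min()

        # Return the best single-pixel adversarial candidate found; the
        # attack succeeds if the model's predicted class has changed.
        adv = image.copy()
        adv[int(g[0]), int(g[1])] = g[2:5]
        return adv, g_fit

A targeted variant would instead maximize the confidence of a chosen target class, i.e., minimize its negative; the paper's reported comparison is against a differential-evolution search over the same (row, col, R, G, B) encoding.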
Date of Conference: 06-08 November 2024
Date Added to IEEE Xplore: 29 November 2024