SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition


Abstract:

Deep learning models are widely used for various purposes such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks. These attacks compute perturbations to generate images that decrease the performance of deep learning models. In this research, we have developed a toolbox, termed SmartBox, for benchmarking the performance of adversarial attack detection and mitigation algorithms for face recognition. SmartBox is a Python-based toolbox that provides an open-source implementation of adversarial detection and mitigation algorithms. In this research, the Extended Yale Face Database B has been used for generating adversarial examples using various attack algorithms such as DeepFool, gradient-based methods, Elastic-Net, and the L2 attack. SmartBox provides a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark. To assist the research community, the code of SmartBox is made available¹.
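As a concrete illustration of the gradient-based attacks mentioned in the abstract, the sketch below implements the fast gradient sign method (FGSM) in plain PyTorch. This is not SmartBox's own API, which is not shown on this page; the model, input image tensor, and label are assumed to be supplied by the user, and the epsilon value is only an example budget.

```python
# Illustrative sketch only: a minimal FGSM (fast gradient sign method) attack,
# representative of the gradient-based attacks SmartBox benchmarks.
# Assumptions: `model`, `image`, and `label` come from the user's own pipeline;
# this is NOT SmartBox's API.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` under an L-infinity budget `epsilon`.

    `image`: float tensor in [0, 1] of shape (1, C, H, W).
    `label`: ground-truth class index, tensor of shape (1,).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()

    # Move each pixel by epsilon in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    # Keep the result a valid image.
    perturbed = torch.clamp(perturbed, 0.0, 1.0)
    return perturbed.detach()
```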
Date of Conference: 22-25 October 2018
Date Added to IEEE Xplore: 25 April 2019
Conference Location: Redondo Beach, CA, USA

1. Introduction

Deep learning models have achieved state-of-the-art performance in various computer vision tasks such as object detection and face recognition [18], [24]. However, recent studies suggest that small, imperceptible perturbations can act as adversaries for these models and lead to incorrect predictions. As shown in Figure 1, imperceptible adversarial noise can be added to the original image to create a perturbed image that appears identical to a human observer, yet causes the algorithm to produce a different prediction than it does for the original image. The majority of recently proposed face recognition algorithms are based on deep learning, and we have observed that existing adversarial attacks may impact face recognition algorithms as well.
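As a small sketch of the effect described above and depicted in Figure 1, the code below compares a model's top-1 prediction on an original image and on a perturbed copy, and reports the maximum per-pixel change to show that the perturbation is visually negligible. The names `model`, `original`, and `perturbed` are assumptions standing in for the user's own model and data; they are not part of SmartBox.

```python
# Sketch, assuming `model`, `original`, and `perturbed` are provided by the user:
# `original` and `perturbed` are float tensors in [0, 1] with shape (1, C, H, W).
import torch

@torch.no_grad()
def compare_predictions(model, original, perturbed):
    # Maximum per-pixel change: a small value means the two images look the same to a human.
    linf = (perturbed - original).abs().max().item()

    pred_clean = model(original).argmax(dim=1).item()
    pred_adv = model(perturbed).argmax(dim=1).item()

    print(f"L-infinity perturbation: {linf:.4f}")
    print(f"Clean prediction: {pred_clean}, adversarial prediction: {pred_adv}")
    return pred_clean != pred_adv  # True if the perturbation changed the model's output
```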

