1. Introduction
Deep learning models have achieved state-of-the-art performance on various computer vision tasks such as object detection and face recognition [18], [24]. However, recent studies have shown that small, imperceptible perturbations can act as adversaries to these models and lead to incorrect predictions. As shown in Figure 1, imperceptible adversarial noise can be added to an original image to create a perturbed image that appears identical to a human observer, yet causes the algorithm to produce a prediction different from the one it gives for the original image. The majority of recently proposed face recognition algorithms are based on deep learning, and we have observed that existing adversarial attacks can affect face recognition algorithms as well.
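For concreteness, the sketch below illustrates one well-known way such imperceptible perturbations can be generated: the fast gradient sign method (FGSM) of Goodfellow et al., which nudges each pixel in the direction that increases the classifier's loss. This is only an illustrative example, not the attack studied in this paper; the PyTorch-style model interface, tensor shapes, and the `epsilon` bound are assumptions made for the sketch.

```python
import torch

def fgsm_perturb(model, image, label, epsilon):
    """Illustrative FGSM adversarial example (not this paper's method).

    model:   a differentiable classifier returning logits
    image:   tensor of shape (1, C, H, W), pixel values in [0, 1]
    label:   ground-truth class index, tensor of shape (1,)
    epsilon: per-pixel L-infinity perturbation budget (kept small so
             the change stays imperceptible to humans)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the sign of the loss gradient, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    # Clamp so the perturbed tensor remains a valid image.
    return perturbed.clamp(0.0, 1.0).detach()
```

With a small budget such as `epsilon = 8/255`, the perturbed image is typically indistinguishable from the original to a human, yet can flip the model's predicted class, which is precisely the behavior depicted in Figure 1.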