I. Introduction
Benefiting from large-scale labeled data, Deep Neural Networks (DNNs) have achieved revolutionary breakthroughs in various computer vision tasks [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]. These advances have injected new vitality into pattern recognition research and set off a research surge that has lasted for more than a decade. However, DNNs also raise a non-negligible safety concern, i.e., the adversarial vulnerability of DNN-based methods [12], [13]: adding human-imperceptible perturbations to clean images can severely mislead even well-trained models. Given this serious security concern, the adversarial learning community has attracted increasing attention from different fields seeking to disclose the principles behind this issue [14], [15], [16], [17], [18], [19], [20], [21], [22], where one important topic is how to attack DNN-based models.
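As a concrete illustration of such an attack, the Fast Gradient Sign Method (FGSM) perturbs each pixel in the direction that increases the classification loss. The sketch below is illustrative only and is not the method studied in this paper; the classifier `model`, input batch `image`, ground-truth `label`, and perturbation budget `epsilon` are assumed placeholders.

```python
# Minimal FGSM sketch (assumed setup, not this paper's method):
# craft an adversarial image within an L-infinity ball of radius epsilon.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Return an adversarial version of `image` that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss on the clean input
    loss.backward()                               # gradient w.r.t. input pixels
    # Step in the sign of the gradient, then keep pixel values in [0, 1].
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0, 1).detach()
```

Although the perturbation is bounded by a small `epsilon` and is barely visible to humans, it is often sufficient to flip the model's prediction, which is precisely the vulnerability discussed above.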