I. Introduction
Szegedy et al. [1] first exposed the vulnerability of deep learning in 2013: a network can be made to misclassify an image by applying a hardly perceptible perturbation, which is found by maximizing the network's prediction error [1]. Images carrying such perturbations that can attack the network are called "adversarial examples". Since then, adversarial examples have attracted considerable research interest, and adversarial attack algorithms have continued to innovate one after another. Early mainstream attacks mostly targeted classification models, including white-box attacks such as L-BFGS [1], FGSM [2], DeepFool [3], JSMA [4], CW [5], and PGD [6], and black-box attacks such as One-pixel Attack [7], UPSET and ANGRI [8], and Houdini [9]. Constrained to keep the perturbation magnitude small or the number of perturbed pixels low, these methods generate adversarial examples by adding noise perturbations to the image as a whole. In contrast, Brown et al. proposed the adversarial patch [10] in 2018, which differs from the prior attacks: although the patch perturbation is neither small nor imperceptible, once the patch has been generated it can directly mount a universal, real-time attack. Since then, more and more patch attacks have emerged, such as RP2 [11], LaVAN [12], and PS-GAN [13]. Based on the existing attack algorithms, we therefore consider that attacks embedding perturbations into images can be divided into two categories: noise perturbations and patch perturbations.
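To make this distinction concrete, the two categories can be sketched with the following illustrative notation, which is ours rather than taken from the cited works. A noise-perturbation attack produces

x_{adv} = x + \delta, \quad \|\delta\|_p \le \epsilon,

where the perturbation \delta is spread over the whole image x and bounded by a small budget \epsilon in some \ell_p norm, whereas a patch attack produces

x_{adv} = (1 - m) \odot x + m \odot p,

where m is a binary mask selecting the patch region, p is the visible, largely unconstrained patch content, and \odot denotes element-wise multiplication.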