I. Introduction
Recent studies, such as FGSM [1], MI-FGSM [2], and PGD [3], generate adversarial examples by adding imperceptible noise to the entire image to disturb the classification of deep neural networks. However, such global noise is usually crafted for each specific image, which makes it difficult to apply in the physical world. Therefore, the adversarial patch was introduced to address this issue of generality [4]. In contrast to global noise, the adversarial patch is constrained to a small region and is more effective at misleading classifiers. The patch can then be printed and pasted in the scene to launch a physical attack.
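For concreteness, a standard sketch of this contrast is as follows: FGSM [1] perturbs the whole image along the sign of the loss gradient, whereas a patch attack overwrites only a small masked region,
\[
  x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\!\big(\nabla_{x} J(\theta, x, y)\big)
  \quad \text{(global noise)}, \qquad
  x_{\mathrm{adv}} = (1 - M) \odot x + M \odot P
  \quad \text{(adversarial patch)},
\]
where $J$ denotes the classification loss of the network with parameters $\theta$, $\epsilon$ bounds the perturbation magnitude, and $M$ and $P$ denote a binary location mask and the patch content, respectively. The symbols $M$ and $P$ are introduced here only for illustration of the patch formulation commonly used in the literature, not as the specific notation of this work.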