Scaling Resilient Adversarial Patch


Abstract:

Deep neural networks are easily fooled by adversarial patches, which cause prediction errors. However, existing adversarial patches are trained for specific model-dataset pairs and are only effective on images at the dataset's predetermined size. The semantic information of the patch is distorted when the image is scaled, a common preprocessing step in practical applications. In this paper, we propose the SRA Patch (Scaling-Resilient Adversarial Patch), a new adversarial patch that is resilient to image scaling. Specifically, we generate the patch in a block-wise way and use the superpixel method to resist the loss of semantic information during scaling. Further, we introduce an ensemble model as a black-box indicator to address the shrinking of the noise space caused by the small size and block-wise construction of the SRA patch. Finally, we leverage Class Activation Mapping to extract the region with the most salient features as the final patch, so that a larger ratio of effective semantic features survives scaling. Extensive experiments demonstrate that the SRA patch has much stronger attack capability and scaling robustness than existing methods.
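
The last step of the pipeline keeps only the most activated region of the trained patch. The sketch below shows one way Class Activation Mapping could be computed with a surrogate classifier to locate such a region; the ResNet-18 surrogate, the 50x50 crop size, and the target class index are assumptions for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

def cam_salient_crop(model, patch, target_class, crop_size=50):
    """Crop the crop_size x crop_size window of `patch` (3 x H x W, in [0, 1])
    with the highest class activation for `target_class` (hypothetical helper)."""
    model.eval()
    # ImageNet normalisation, since the surrogate here is a torchvision model.
    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
    with torch.no_grad():
        # Last convolutional feature maps (1 x 512 x h x w for ResNet-18).
        backbone = torch.nn.Sequential(*list(model.children())[:-2])
        feats = backbone(((patch - mean) / std).unsqueeze(0))
        # CAM: weight the feature maps by the fc weights of the target class.
        cam = torch.einsum('c,bchw->bhw', model.fc.weight[target_class], feats)
        cam = F.interpolate(cam.unsqueeze(1), size=patch.shape[1:],
                            mode='bilinear', align_corners=False)[0, 0]
    # A square crop centred on the CAM maximum becomes the final patch.
    cy, cx = divmod(int(cam.argmax()), cam.shape[1])
    half = crop_size // 2
    y0 = max(0, min(cy - half, patch.shape[1] - crop_size))
    x0 = max(0, min(cx - half, patch.shape[2] - crop_size))
    return patch[:, y0:y0 + crop_size, x0:x0 + crop_size]

# Example: crop a 50x50 salient region from a 100x100 candidate patch.
surrogate = resnet18(weights=ResNet18_Weights.DEFAULT)
final_patch = cam_salient_crop(surrogate, torch.rand(3, 100, 100), target_class=859)
```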
Date of Conference: 04-07 October 2021
Conference Location: Denver, CO, USA


I. Introduction

Recent studies, such as FGSM [1], MI-FGSM [2], and PGD [3], generate adversarial examples by adding imperceptible noise to the whole image to disrupt the classification of deep neural networks. However, this global noise is usually generated for each specific image, making it hard to apply in the physical world. The adversarial patch was therefore introduced to address this versatility issue [4]. In contrast to global noise, the adversarial patch is confined to a small area and is more effective at misleading classifiers. The patch can then be printed and pasted into a scene to launch a physical attack; a gradient-sign sketch illustrating the image-specific nature of global noise follows below.
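
For context, the following minimal sketch shows how a single-step gradient-sign attack in the spirit of FGSM [1] perturbs one specific image, which is why such global noise is hard to reuse outside the digital setting. The model interface and the epsilon value are assumptions for illustration, not tied to any experiment in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=8 / 255):
    """Single signed-gradient step on one (image, label) pair; image is 3 x H x W in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # The perturbation depends on this image's gradient, so it cannot be
    # reused across images the way a printed adversarial patch can.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```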
