Abstract:
Deep neural networks are easily fooled by adversarial patches, which cause prediction errors. However, existing adversarial patches are trained on specific model-dataset pairs and are only effective for images at the size predetermined by the dataset. The semantic information of the patch is distorted when the image is scaled, a common preprocessing step in practical applications. In this paper, we propose the SRA Patch (Scaling-Resilient Adversarial Patch), a new adversarial patch that is resilient to image scaling. Specifically, we generate the patch in a block-wise manner and utilize a superpixel method to resist the loss of semantic information during scaling. Further, we introduce an ensemble model as a black-box indicator to address the noise-space shrinking issue caused by the small size and the block operation of the SRA patch. Finally, we leverage Class Activation Mapping to extract the region with the most salient features as the final patch, improving the proportion of effective semantic features on the patch that survive scaling. Extensive experiments demonstrate that our SRA patch has much stronger attack capability and scaling robustness than existing methods.
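The block-wise, superpixel-based construction described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it only shows the general idea of constraining a candidate patch to be piecewise-constant over superpixels, so that downscaling averages pixels that already share one value and the patch's semantic structure degrades less. The function name and parameters below are hypothetical assumptions for illustration.

```python
# Hedged sketch of the superpixel / block-wise idea (assumed details, not the SRA Patch implementation).
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize

def superpixel_blockify(patch, n_segments=64, compactness=10.0):
    """Replace each SLIC superpixel of `patch` (H x W x 3, float in [0, 1])
    with its mean colour, yielding a block-wise patch."""
    labels = slic(patch, n_segments=n_segments, compactness=compactness, start_label=1)
    out = np.zeros_like(patch)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = patch[mask].mean(axis=0)  # one colour per superpixel
    return out

# Usage: blockify a random candidate patch, then apply a typical preprocessing resize
# to see that the piecewise-constant structure is largely preserved.
patch = np.random.rand(96, 96, 3)
blocky = superpixel_blockify(patch)
small = resize(blocky, (48, 48, 3), anti_aliasing=True)
```

In the paper's actual pipeline, such a block structure would be imposed during patch optimization (together with the ensemble black-box indicator and CAM-based region selection), not as a post-hoc filter as in this sketch.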
Date of Conference: 04-07 October 2021
Date Added to IEEE Xplore: 13 December 2021