
PGD-Optimized Patch and Noise Joint Embedded Adversarial Example for Faster RCNN and YOLOv4


Abstract:

Faster RCNN and YOLO show stable and outstanding performance as state-of-the-art and widely used object detectors in real applications. However, research and security incidents show that Faster RCNN and YOLO can be attacked by adversarial examples, resulting in erroneous or invalid detection results. According to existing attack methods in computer vision, the imperceptible perturbations embedded into the initial images can be categorized into noise perturbations and patch perturbations. In this paper, we propose an attack method that combines the advantages of the two categories. Our attack is based on PGD iterative attacks and patch optimization. We can attack Faster RCNN and YOLOv4 successfully in less time, and the generated adversarial examples remain similar to the original images, which makes the attack more stealthy.
Date of Conference: 25-27 September 2021
Date Added to IEEE Xplore: 11 November 2021
Conference Location: Nanning, China

I. Introduction

Christian Szegedy [1] discovered the vulnerability of deep learning in 2013: a network can be made to misclassify an image by applying a hardly perceptible perturbation, found by maximizing the network's prediction error [1]. Images carrying perturbations that can attack a network are called "adversarial examples". Since then, adversarial examples have aroused the interest of researchers, and adversarial attack algorithms have continued to appear one after another. Early mainstream attack methods mostly focused on classification models, including white-box attacks such as L-BFGS [1], FGSM [2], DeepFool [3], JSMA [4], CW [5] and PGD [6], and black-box attacks such as One-Pixel Attack [7], UPSET and ANGRI [8], and Houdini [9]. Following the principle of keeping the perturbation small (in magnitude or in the number of disturbed pixels), these methods generate adversarial examples by adding noise perturbations to the whole image. However, Brown et al. innovatively proposed the Adversarial Patch [10] in 2018, which differs from the prior attacks: although the patch perturbation is neither small nor imperceptible, it can directly achieve a universal real-time attack once the patch has been generated. Later, more and more patch attacks emerged in succession, such as RP2 [11], LaVAN [12] and PS-GAN [13]. Therefore, according to the existing attack algorithms, we believe the attacks that embed perturbations into images can be divided into noise perturbations and patch perturbations.
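To make the distinction between the two perturbation categories concrete, the following is a minimal PGD-style sketch in PyTorch that perturbs either the whole image (noise perturbation) or only a masked region (patch-style perturbation). The model, loss function, and mask are illustrative assumptions, not the authors' implementation; an attack on Faster RCNN or YOLOv4 would replace the classification loss with the detector's own objective.

```python
# Minimal PGD-style sketch (after Madry et al. [6]); illustrative only.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10, patch_mask=None):
    """Iteratively maximize the loss under an L_inf budget.

    patch_mask: optional {0,1} tensor shaped like x. If given, updates are
    confined to the masked region (patch-style); otherwise the whole image
    is perturbed (noise-style). Real patch attacks such as [10] usually drop
    the eps constraint inside the patch; a single budget is kept here for brevity.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)      # placeholder objective
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            step = alpha * grad.sign()
            if patch_mask is not None:
                step = step * patch_mask             # restrict update to patch region
            x_adv = x_adv + step
            # project back into the eps-ball around x and the valid pixel range
            x_adv = torch.clamp(x_adv, x - eps, x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```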
