Playing Against Deep-Neural-Network-Based Object Detectors: A Novel Bidirectional Adversarial Attack Approach


Impact Statement:
Deep-learning-based object detection techniques have been widely used in many computer-vision tasks, such as autonomous driving, face recognition, and so forth. However, there are circumstances where people prefer to make their images undetectable, perhaps due to privacy or confidentiality concerns. For such needs, we propose a novel method that can reduce the success rate of advanced mainstream object detectors by 83% and can support various types of images. The proposed method adds carefully designed perturbations to the images, and its generation process is the fastest among existing antidetection methods. By providing practitioners with easy-to-implement algorithms, we expect that the proposed method can help reduce the risk of misuse of object detection techniques.

Abstract:

In the fields of deep learning and computer vision, the security of object detection models has received extensive attention. Revealing the security vulnerabilities resulting from adversarial attacks has become one of the most important research directions. Existing studies show that object detection models can also be threatened by adversarial examples, just like other deep-neural-network-based models, e.g., those for classification. In this article, we propose a bidirectional adversarial attack method. First, the added perturbation pushes the prediction results given by the object detectors far away from the ground-truth class while getting close to the background class. Second, a confidence loss function is designed for the region proposal network to reduce the foreground scores. Third, the adversarial examples are generated by a pretrained autoencoder, and the model is trained using an adversarial approach, which can enhance the similarity between the adversarial examples and the original images.
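
For readers who want a concrete picture of the two loss terms named in the abstract, the sketch below is a minimal, illustrative rendering in PyTorch. It is not the authors' implementation: the function names, the assumed detector outputs (per-proposal class logits and RPN objectness logits), and the background_idx parameter are all hypothetical, and the paper's actual loss formulations may differ in weighting and detail. The autoencoder that generates the perturbation is omitted here.

    import torch
    import torch.nn.functional as F

    def bidirectional_class_loss(class_logits, gt_labels, background_idx):
        # Bidirectional objective: minimizing this loss pushes the predicted
        # class distribution away from the ground-truth class (first term)
        # and toward the background class (second term).
        log_probs = F.log_softmax(class_logits, dim=-1)  # shape (N, C)
        away_from_gt = log_probs.gather(1, gt_labels.unsqueeze(1)).squeeze(1)  # log p(gt)
        toward_background = -log_probs[:, background_idx]                      # -log p(bg)
        return (away_from_gt + toward_background).mean()

    def rpn_confidence_loss(objectness_logits):
        # Confidence loss for the region proposal network: minimizing the
        # mean foreground probability suppresses the RPN objectness scores.
        return torch.sigmoid(objectness_logits).mean()

    # One hypothetical attack step on an additive perturbation delta
    # (the detector API below is assumed, not taken from the paper):
    #   class_logits, objectness_logits = detector(image + delta)
    #   loss = bidirectional_class_loss(class_logits, gt_labels, bg_idx) \
    #        + rpn_confidence_loss(objectness_logits)
    #   loss.backward()   # gradient w.r.t. delta drives the update

Note that in the paper itself the perturbation is produced by a pretrained autoencoder trained adversarially, rather than by direct gradient steps on delta; the sketch only illustrates the geometry of the two losses.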
Published in: IEEE Transactions on Artificial Intelligence (Volume: 3, Issue: 1, February 2022)
Page(s): 20 - 28
Date of Publication: 25 August 2021
Electronic ISSN: 2691-4581
