Abstract:
With the advent of adversarial samples, the safety of Artificial Intelligence is of particular concern, as only a small amount of perturbation need be added to mislead a model's judgment. There is therefore an urgent need for research on models that can resist adversarial perturbations. To alleviate this problem, we first analyze the vulnerability of models to adversarial samples and propose a unified-perspective robust model for object detection that accurately identifies both clean and adversarial samples. We propose robust object detection based on a contrastive learning perspective (RCP), which learns features from both clean and adversarial samples at a fine-grained level and recognizes adversarial samples more accurately. Extensive experiments on PASCAL VOC and MS COCO show that our proposed method degrades clean-sample detection performance only slightly in exchange for a large improvement in robustness against adversarial attacks, achieving state-of-the-art results.
Date of Conference: 14-17 November 2023
Date Added to IEEE Xplore: 25 December 2023
Conference Location: Abu Dhabi, United Arab Emirates