
Analyzing the Robustness of Deep Learning Against Adversarial Examples


Abstract:

Recent studies have shown the vulnerability of many deep learning algorithms to adversarial examples, which an attacker obtains by adding subtle perturbations to benign inputs in order to cause misbehavior of deep learning models. For instance, an attacker can add carefully selected noise to a panda image so that the resulting image still looks like a panda to a human but is predicted as a gibbon by the deep learning algorithm. As a first step toward effective defense mechanisms against such adversarial examples, we analyze the robustness of deep learning against them. Specifically, we prove a strict lower bound on the minimum ℓp distortion required to turn a benign data point into an adversarial example.
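To make the panda-to-gibbon scenario concrete, the following is a minimal sketch of an ℓ∞-bounded perturbation in the style of the fast gradient sign method, applied to a toy linear classifier. This is an illustration of the attack model the abstract describes, not the paper's analysis; the classifier, weights, and budget `eps` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=16)   # weights of a toy linear classifier (score = w . x)
x = rng.normal(size=16)   # benign input
eps = 0.05                # l_inf perturbation budget (the "subtle" noise level)

# For the loss L(x) = -w . x (pushing the class score down),
# the gradient with respect to the input is simply -w.
grad = -w

# Fast-gradient-sign step: move each coordinate by +/- eps
# in the direction that increases the loss.
x_adv = x + eps * np.sign(grad)

# The perturbation stays within the l_inf budget,
# yet the classifier's score strictly decreases.
print(np.max(np.abs(x_adv - x)))  # <= eps
print(w @ x_adv < w @ x)          # True
```

The lower bound proved in the paper addresses exactly this quantity: how large the distortion `x_adv - x` must be, in an ℓp norm, before any such misclassification is possible.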
Date of Conference: 02-05 October 2018
Date Added to IEEE Xplore: 07 February 2019
Conference Location: Monticello, IL, USA

