
Improving Adversarial Images Using Activation Maps


Abstract:

Deep Neural Networks are currently gaining a lot of attention for their near human-level performance in tasks such as image classification and object detection. As a result, they are also being deployed in security-critical and real-time systems such as face recognition and autonomous cars. This requires models to be robust to changes in the input. However, recent literature has shown that they are easily fooled when human-imperceptible noise, also known as adversarial noise, is added to the input. Exploiting this vulnerability, various adversarial attacks, and defences against them, have been introduced so far. In this paper, we propose a new approach which can be used alongside any existing adversarial attack to further reduce the L2 distance between the generated adversarial image and the original image. Our approach can also be thought of as a new adversarial attack built on top of an existing attack. We evaluated our approach on the ImageNet dataset. Using our approach, we were able to reduce the L2 distance for around 60-70% of the images sampled from the ImageNet dataset.
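
The abstract only describes the approach at a high level, so the following is a minimal sketch of the general idea rather than the authors' method: an activation (saliency) map is used as a spatial mask so that the adversarial perturbation is kept only in the regions the network attends to, which can shrink the L2 distance to the original image while ideally preserving the misclassification. The classify callable, the mask-threshold schedule, and the array shapes are illustrative assumptions.

import numpy as np


def l2_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean (L2) distance between two images of identical shape."""
    return float(np.linalg.norm(a.astype(np.float64) - b.astype(np.float64)))


def refine_adversarial(original, adversarial, activation, classify, true_label,
                       thresholds=(0.9, 0.7, 0.5, 0.3, 0.1)):
    """Mask the adversarial perturbation with progressively larger regions of
    the activation map and return the candidate with the smallest L2 distance
    to the original that still changes the model's prediction.

    original, adversarial: float arrays of shape (H, W, C)
    activation:            float array of shape (H, W), e.g. from a CAM-style method
    classify:              callable mapping an image to a predicted label (hypothetical)
    """
    original = original.astype(np.float64)
    adversarial = adversarial.astype(np.float64)
    perturbation = adversarial - original

    # Normalise the activation map to [0, 1] so the thresholds are comparable.
    act = (activation - activation.min()) / (activation.max() - activation.min() + 1e-12)

    best, best_dist = adversarial, l2_distance(original, adversarial)
    for t in thresholds:
        mask = (act >= t).astype(np.float64)[..., None]   # keep perturbation only on salient pixels
        candidate = original + perturbation * mask
        if classify(candidate) != true_label:             # still adversarial?
            dist = l2_distance(original, candidate)
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

Because the masked candidate reuses the perturbation from any existing attack, this kind of post-processing can, in principle, be layered on top of an arbitrary attack, which matches how the abstract frames the contribution.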
Date of Conference: 24-26 May 2019
Date Added to IEEE Xplore: 05 August 2019
Conference Location: Chongqing, China
