Abstract:
Deep neural network-based image classification models are vulnerable to adversarial examples, which are meticulously crafted to mislead the model by adding perturbations to clean images. Although adversarial training is highly effective at improving model robustness against adversarial examples, it often comes at the expense of clean accuracy. To address this problem, this article proposes a strategy for achieving a better tradeoff between accuracy and robustness, built mainly on sign-based perturbations and example mixing. First, we apply a sign operation to the randomly generated initial perturbations, which enables the model to identify the correct attack direction more quickly during training. Second, we propose a method that mixes different examples to generate more distinct adversarial features. Furthermore, we apply scaling conditions for tensor feature modulation, enabling the model to achieve both improved accuracy and robustness after learning more diverse adversarial features. Finally, we conduct extensive experiments that demonstrate the feasibility and effectiveness of the proposed methods.
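The abstract does not give implementation details, so the following is only a minimal sketch of how the two core ingredients might look, assuming a PGD-style random start whose sign is taken to fix the initial attack direction, and a mixup-style convex combination for example mixing. The names sign_init_perturbation and mix_examples and the hyperparameters epsilon and alpha are hypothetical illustrations, not the authors' code.

```python
# Hypothetical sketch of the two ingredients described in the abstract.
import torch
import torch.nn.functional as F

def sign_init_perturbation(x, epsilon=8 / 255):
    # Draw a uniform random initial perturbation, then keep only its sign,
    # scaled to the epsilon ball (assumed reading of "symbol/sign perturbation").
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
    return epsilon * delta.sign()

def mix_examples(x, y, num_classes, alpha=1.0):
    # Mixup-style convex combination of shuffled example pairs
    # (assumed reading of "example mixing"); labels are mixed the same way.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1 - lam) * x[idx]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mix, y_mix
```

In this reading, the signed initialization replaces the unconstrained random start of standard adversarial training, and the mixed batch is what gets perturbed, so the inner attack sees more diverse adversarial features.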
Published in: IEEE Journal on Miniaturization for Air and Space Systems (Volume: 5, Issue: 4, December 2024)