Abstract:
Studies have shown that deep neural networks (DNNs) are susceptible to adversarial attacks, which can cause misclassification. The adversarial attack problem can be cast as an optimization problem, so a problem-independent genetic algorithm (GA) can naturally be designed to solve it and generate effective adversarial examples. However, owing to the curse of dimensionality in image processing, traditional genetic algorithms often fall into local optima on high-dimensional problems. We therefore propose a GA with multiple fitness functions (MF-GA). Specifically, we divide the evolution process into three stages: an exploration stage, an exploitation stage, and a stable stage. A different fitness function is used in each stage, which helps the GA escape local optima. Experiments are conducted on three datasets, with four classic algorithms and the basic GA adopted for comparison. The results demonstrate that MF-GA is an effective black-box attack method. Furthermore, although MF-GA operates in a black-box setting, its performance is competitive with that of the four classic algorithms operating in white-box settings. This shows that evolutionary algorithms have great potential for adversarial attacks.
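The abstract does not specify the paper's actual fitness functions, stage boundaries, or GA operators, so the sketch below is only an illustration of the general idea: a genetic algorithm whose fitness function is swapped across three evolution stages (exploration, exploitation, stable). Every formula, threshold, and helper here (dummy_model, the distance penalties, the one-third stage split) is a hypothetical stand-in, not the method from the paper.

```python
# Minimal sketch of a staged multi-fitness GA attack (assumptions throughout):
# the classifier, fitness formulas, and stage cutoffs are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def dummy_model(x):
    """Stand-in black-box classifier: softmax over 10 classes from the first 10 values."""
    logits = x.reshape(-1)[:10]
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fitness_exploration(x_adv, x_orig, true_label):
    # Early stage: reward only lowering the true-class confidence (coarse search).
    return -dummy_model(x_adv)[true_label]

def fitness_exploitation(x_adv, x_orig, true_label):
    # Middle stage: also lightly penalize perturbation size to refine candidates.
    conf = dummy_model(x_adv)[true_label]
    return -conf - 0.01 * np.linalg.norm(x_adv - x_orig)

def fitness_stable(x_adv, x_orig, true_label):
    # Late stage: emphasize keeping the perturbation small once misclassification holds.
    conf = dummy_model(x_adv)[true_label]
    return -conf - 0.1 * np.linalg.norm(x_adv - x_orig)

def stage_fitness(gen, max_gen):
    # Hypothetical stage split: first/middle/last third of the generations.
    if gen < max_gen // 3:
        return fitness_exploration
    if gen < 2 * max_gen // 3:
        return fitness_exploitation
    return fitness_stable

def mf_ga_attack(x_orig, true_label, pop_size=20, max_gen=30, eps=0.05):
    dim = x_orig.size
    pop = x_orig + rng.uniform(-eps, eps, size=(pop_size, dim))
    for gen in range(max_gen):
        fit_fn = stage_fitness(gen, max_gen)
        scores = np.array([fit_fn(ind, x_orig, true_label) for ind in pop])
        # Truncation selection: keep the better half as parents.
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5             # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, eps / 10, dim)    # Gaussian mutation
            child = np.clip(child, x_orig - eps, x_orig + eps)
            children.append(child)
        pop = np.vstack([parents, children])
    final_fit = stage_fitness(max_gen - 1, max_gen)
    return pop[np.argmax([final_fit(ind, x_orig, true_label) for ind in pop])]

# Usage: attack a random "image" flattened to a vector.
x = rng.random(28 * 28)
adv = mf_ga_attack(x, true_label=3)
print("true-class confidence before/after:", dummy_model(x)[3], dummy_model(adv)[3])
```

The key design point the abstract highlights is the stage-dependent fitness: a coarse objective early on keeps the population exploring, while the later, distance-penalized objectives exploit and stabilize promising candidates, which is what is claimed to help the GA avoid getting stuck in local optima.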
Published in: 2021 IEEE Congress on Evolutionary Computation (CEC)
Date of Conference: 28 June 2021 - 01 July 2021
Date Added to IEEE Xplore: 09 August 2021