Abstract:
Large pre-trained language models have shown strong capabilities in natural language processing, and recent research has demonstrated that they can also produce impressive results in automatic code generation from natural language descriptions. Although such models perform well across a variety of domains, there is evidence that they are vulnerable to adversarial attacks. Consequently, it has become important to measure model robustness by producing adversarial examples. In this study, we propose an attack method called the Modifier for Code Generation Attack (M-CGA), marking the first application of a white-box adversarial attack to the field of code generation. M-CGA measures the robustness of a model by producing adversarial examples that cause the model to generate code that is incorrect or does not meet usage requirements. Preliminary experimental results show that M-CGA is an effective attack method, opening a new research direction in automatic code synthesis.
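For readers unfamiliar with white-box attacks on text-to-code models, the sketch below illustrates the general idea of a gradient-guided (HotFlip-style) single-token substitution on a prompt, under the assumption of full access to model gradients. It is only a minimal illustration of the attack family the abstract refers to, not the paper's M-CGA algorithm, whose details the abstract does not give; the model name, prompt, and target completion are placeholders.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder; any causal LM that accepts inputs_embeds
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "# Return the sum of two integers\ndef add(a, b):"
target = " return a - b"  # incorrect completion the attacker tries to induce

prompt_ids = tok(prompt, return_tensors="pt").input_ids
target_ids = tok(target, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, target_ids], dim=1)

# Loss is computed only on the target span: prompt positions are masked out.
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

emb_matrix = model.get_input_embeddings().weight          # (vocab, dim)
inputs_embeds = emb_matrix[input_ids].detach().clone()    # (1, seq, dim)
inputs_embeds.requires_grad_(True)

loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()

# First-order estimate of how the target loss changes if prompt token i is
# replaced by vocabulary token w: grad_i . (e_w - e_i). Pick the single
# (position, token) swap that most decreases the loss on the bad target.
grad = inputs_embeds.grad[0, : prompt_ids.shape[1]]        # (prompt_len, dim)
cur = inputs_embeds.detach()[0, : prompt_ids.shape[1]]     # (prompt_len, dim)
delta = (emb_matrix @ grad.T).T - (cur * grad).sum(-1, keepdim=True)
pos = int(delta.min(dim=1).values.argmin())
new_tok = int(delta[pos].argmin())

adv_ids = prompt_ids.clone()
adv_ids[0, pos] = new_tok
print("original prompt: ", tok.decode(prompt_ids[0]))
print("perturbed prompt:", tok.decode(adv_ids[0]))

In practice such attacks iterate the swap step and add constraints so the perturbed prompt stays natural-looking; the single-step version above is kept deliberately minimal.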
Published in: 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
Date of Conference: 21-24 March 2023
Date Added to IEEE Xplore: 15 May 2023