
Understanding the Robustness of Transformer-Based Code Intelligence via Code Transformation: Challenges and Opportunities



Abstract:

Transformer-based models have demonstrated state-of-the-art performance in various intelligent coding tasks such as code comment generation and code completion. Previous studies show that deep learning models are sensitive to input variations, but few have systematically studied the robustness of Transformers under perturbed input code. In this work, we empirically study the effect of semantic-preserving code transformations on the performance of Transformers. Specifically, we implement 27 and 24 code transformation strategies for two popular programming languages, Java and Python, respectively. To facilitate analysis, the strategies are grouped into five categories: block transformation, insertion/deletion transformation, grammatical statement transformation, grammatical token transformation, and identifier transformation. Experiments on three popular code intelligence tasks, namely code completion, code summarization, and code search, demonstrate that insertion/deletion transformations and identifier transformations have the greatest impact on the performance of Transformers. Our results also suggest that Transformers based on abstract syntax trees (ASTs) are more robust than models based only on code sequences under most code transformations. In addition, the design of positional encoding can affect the robustness of Transformers under code transformations. To expand our study, we further investigate individual code transformations at the strategy level and explore other factors influencing the robustness of Transformers. Furthermore, we explore applications of code transformations. Based on our findings, we distill insights about the challenges and opportunities for Transformer-based code intelligence from various perspectives.
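
For illustration, the following is a minimal sketch of one such semantic-preserving transformation, identifier renaming, written in Python with the standard ast module. The class and naming scheme here are hypothetical and are not taken from the paper's tooling; they only demonstrate the kind of perturbation the identifier-transformation category applies.

    import ast

    class IdentifierRenamer(ast.NodeTransformer):
        """Rename function parameters and locally bound variables to opaque
        names while preserving program semantics (a hypothetical sketch of
        an 'identifier transformation', not the paper's implementation)."""

        def __init__(self):
            self.mapping = {}

        def _new_name(self, old):
            # Reuse the same replacement for repeated occurrences of a name.
            if old not in self.mapping:
                self.mapping[old] = f"var_{len(self.mapping)}"
            return self.mapping[old]

        def visit_arg(self, node):
            # Function parameters are ast.arg nodes.
            node.arg = self._new_name(node.arg)
            return node

        def visit_Name(self, node):
            if isinstance(node.ctx, ast.Store):
                # A binding occurrence: assign it a fresh opaque name.
                node.id = self._new_name(node.id)
            elif node.id in self.mapping:
                # A use of a name we already renamed; globals and
                # builtins are left untouched.
                node.id = self.mapping[node.id]
            return node

    src = "def add(x, y):\n    total = x + y\n    return total\n"
    tree = IdentifierRenamer().visit(ast.parse(src))
    print(ast.unparse(tree))
    # def add(var_0, var_1):
    #     var_2 = var_0 + var_1
    #     return var_2

Because only identifiers change, the transformed function computes exactly what the original does, which is what makes such perturbations useful for probing whether a model relies on identifier names rather than program structure.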
Published in: IEEE Transactions on Software Engineering ( Volume: 51, Issue: 2, February 2025)
Page(s): 521 - 547
Date of Publication: 16 January 2025

