
LLM-based Interactive Code Generation: Empirical Evaluation



Abstract:

Recently, large language models (LLMs), particularly those pretrained on code, have demonstrated strong capabilities in generating programs from informal natural language intent. However, LLM-generated code is prone to bugs. Developers interacting with LLMs seek trusted code and, ideally, clear indications of potential bugs and vulnerabilities. Verified code can mitigate the business risks associated with adopting generated code. We use the model-agnostic framework CodePatchLLM, an extension for LLMs that utilizes Svace feedback to enhance code generation quality. We evaluate CodePatchLLM on four popular LLMs across three datasets. Our experiments show an average absolute reduction of 19.1% in static analyzer warnings for Java across all datasets and models, while preserving pass@1 code generation accuracy.
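The abstract describes CodePatchLLM as feeding static-analyzer (Svace) warnings back to the LLM to improve generated code. A minimal sketch of such an analyzer-in-the-loop repair cycle is shown below; the `generate` and `analyze` callables and the prompt format are hypothetical placeholders, not the paper's actual API:

```python
def generate_with_analyzer_feedback(generate, analyze, prompt, max_rounds=3):
    """Sketch of a static-analyzer feedback loop for LLM code generation.

    generate(prompt) -> code string  (stands in for an LLM call)
    analyze(code)    -> list of warning strings (stands in for Svace)
    Regenerates until the analyzer reports no warnings or the
    round budget is exhausted.
    """
    code = generate(prompt)
    for _ in range(max_rounds):
        warnings = analyze(code)
        if not warnings:
            break  # analyzer-clean: stop early
        # Append the warnings to the original intent and ask for a fix.
        feedback = prompt + "\nFix these analyzer warnings:\n" + "\n".join(warnings)
        code = generate(feedback)
    return code
```

With stub `generate`/`analyze` functions, a single repair round suffices when the regenerated code passes the analyzer, which mirrors the goal of reducing warnings without extra generation attempts counted against pass@1.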
Date of Conference: 11-12 December 2024
Date Added to IEEE Xplore: 28 February 2025
Conference Location: Moscow, Russian Federation

