Abstract:
Recently, large language models (LLMs), particularly those pretrained on code, have demonstrated strong capabilities in generating programs from informal natural language intent. However, LLM-generated code is prone to bugs. Developers interacting with LLMs seek trusted code and, ideally, clear indications of potential bugs and vulnerabilities. Verified code can mitigate the business risks associated with adopting generated code. We present CodePatchLLM, a model-agnostic framework that extends an LLM with feedback from the Svace static analyzer to enhance code generation quality. We evaluate CodePatchLLM on four popular LLMs across three datasets. Our experiments show an average absolute reduction of 19.1% in static analyzer warnings for Java across all datasets and models, while preserving pass@1 code generation accuracy.
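The pass@1 metric mentioned in the abstract is typically computed with the unbiased pass@k estimator introduced for HumanEval-style evaluation; the sketch below is a standard formulation, not code from this paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n generations of which c pass
    all unit tests, is correct. For k = 1 this reduces to c / n."""
    if n - c < k:
        # Every possible draw of k samples contains a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations, 5 correct -> pass@1 = 0.5
print(pass_at_k(10, 5, 1))
```

Preserving pass@1 while reducing analyzer warnings means the repair loop must not break functional correctness as measured by this estimator.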
Published in: 2024 Ivannikov Ispras Open Conference (ISPRAS)
Date of Conference: 11-12 December 2024
Date Added to IEEE Xplore: 28 February 2025