
Can OpenAI's Codex Fix Bugs?: An evaluation on QuixBugs



Abstract:

OpenAI's Codex, a GPT-3-like model trained on a large code corpus, has made headlines in and outside of academia. Given a short user-provided description, it is capable of synthesizing code snippets that are syntactically and semantically valid in most cases. In this work, we want to investigate whether Codex is able to localize and fix bugs, two important tasks in automated program repair (APR). Our initial evaluation uses the multi-language QuixBugs benchmark (40 bugs in both Python and Java). We find that, despite not being trained for APR, Codex is surprisingly effective and competitive with recent state-of-the-art techniques. Our results also show that Codex is more successful at repairing Python than Java, fixing 50% more bugs in Python.
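
To make the evaluation setting concrete, the sketch below shows how one might prompt a Codex-style completion model to repair a QuixBugs-style buggy function. This is an illustration only, not the authors' pipeline: the legacy openai (<1.0) Python client, the engine name code-davinci-002, and the prompt wording are all assumptions.

import openai  # legacy (<1.0) client; reads OPENAI_API_KEY from the environment

# A QuixBugs-style buggy function: the recursive call passes its arguments
# in the wrong order, so the recursion never reaches the base case.
BUGGY_GCD = """\
def gcd(a, b):
    if b == 0:
        return a
    else:
        return gcd(a % b, b)
"""

# Prompt the model with the buggy code and ask it to complete a fixed version.
prompt = (
    "### The following Python function is buggy.\n"
    + BUGGY_GCD
    + "\n### Fixed version:\n"
    + "def gcd(a, b):\n"
)

response = openai.Completion.create(
    engine="code-davinci-002",  # assumed engine name; Codex is no longer served by OpenAI
    prompt=prompt,
    max_tokens=128,
    temperature=0.0,            # greedy decoding for a single, reproducible candidate
    stop=["###"],               # stop before the model starts another section
)

# The candidate patch is the completion appended to the repeated function header.
candidate_fix = "def gcd(a, b):\n" + response.choices[0].text
print(candidate_fix)

In an actual evaluation, each candidate patch would be validated against the QuixBugs test suite for the corresponding program.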
Date of Conference: 19 May 2022
Date Added to IEEE Xplore: 04 July 2022
Conference Location: Pittsburgh, PA, USA
