Abstract:
OpenAI's Codex, a GPT-3-like model trained on a large code corpus, has made headlines both inside and outside of academia. Given a short user-provided description, it is capable of synthesizing code snippets that are syntactically and semantically valid in most cases. In this work, we investigate whether Codex is able to localize and fix bugs, two important tasks in automated program repair. Our initial evaluation uses the multi-language QuixBugs benchmark (40 bugs in both Python and Java). We find that, despite not being trained for APR, Codex is surprisingly effective and competitive with recent state-of-the-art techniques. Our results also show that Codex is more successful at repairing Python than Java, fixing 50% more bugs in Python.
Date of Conference: 19 May 2022
Date Added to IEEE Xplore: 04 July 2022