
On Hardware Security Bug Code Fixes by Prompting Large Language Models


Abstract:

Novel AI-based code-writing Large Language Models (LLMs) such as OpenAI's Codex have demonstrated capabilities in many coding-adjacent domains. In this work, we consider how LLMs may be leveraged to automatically repair identified security-relevant bugs present in hardware designs by generating replacement code. We focus on bug repair in code written in Verilog. For this study, we curate a corpus of domain-representative hardware security bugs. We then design and implement a framework to quantitatively evaluate the performance of any LLM tasked with fixing the specified bugs. The framework supports design space exploration of prompts (i.e., prompt engineering) and identifying the best parameters for the LLM. We show that an ensemble of LLMs can repair all fifteen of our benchmarks. This ensemble outperforms a state-of-the-art automated hardware bug repair tool on its own suite of bugs. These results show that LLMs can repair hardware security bugs and that the framework is an important step toward the ultimate goal of an automated end-to-end bug repair tool.
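
As a rough illustration of the prompt-and-parameter exploration described in the abstract, the sketch below sends a buggy Verilog snippet to an LLM at several temperatures and collects candidate repairs. This is not the authors' framework; the model name, prompt wording, bug example, and temperature values are placeholders chosen only to show the shape of such a workflow, assuming an OpenAI-style chat-completions API.

    # Illustrative sketch only, not the paper's evaluation framework.
    # Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical security bug: a lock-protected register written unconditionally.
    BUGGY_VERILOG = """\
    // debug_unlock must only be writable while lock_status is clear.
    always @(posedge clk) begin
        if (write_en)              // BUG: missing check of lock_status
            debug_unlock <= wdata;
    end
    """

    PROMPT = (
        "The following Verilog has a hardware security bug: a register that "
        "should be protected by lock_status can be written unconditionally. "
        "Rewrite the always block so writes are gated on !lock_status.\n\n"
        + BUGGY_VERILOG
    )

    def candidate_fixes(temperatures=(0.1, 0.5, 0.9), n=5):
        """Collect repair candidates across a small temperature sweep."""
        fixes = []
        for t in temperatures:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",   # placeholder model name
                messages=[{"role": "user", "content": PROMPT}],
                temperature=t,
                n=n,
            )
            fixes.extend(choice.message.content for choice in resp.choices)
        return fixes

Each candidate would then be spliced back into the design and checked, e.g., by simulation or formal assertions, to decide whether the repair preserves functionality and removes the vulnerability.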
Page(s): 4043–4057
Date of Publication: 07 March 2024
