Loading [MathJax]/extensions/MathMenu.js
PoisonPrompt: Backdoor Attack on Prompt-Based Large Language Models | IEEE Conference Publication | IEEE Xplore

PoisonPrompt: Backdoor Attack on Prompt-Based Large Language Models


Abstract:

Prompts have significantly improved the performance of pre-trained Large Language Models (LLMs) on various downstream tasks recently, making them increasingly indispensab...Show More

Abstract:

Prompts have significantly improved the performance of pre-trained Large Language Models (LLMs) on various downstream tasks recently, making them increasingly indispensable for a diverse range of LLM application scenarios. However, the backdoor vulnerability, a serious security threat that can maliciously alter the victim model’s normal predictions, has not been sufficiently explored for prompt-based LLMs. In this paper, we present PoisonPrompt, a novel backdoor attack capable of successfully compromising both hard and soft prompt-based LLMs. We evaluate the effectiveness, fidelity, and robustness of PoisonPrompt through extensive experiments on three popular prompt methods, using six datasets and three widely used LLMs. Our findings highlight the potential security threats posed by backdoor attacks on prompt-based LLMs and emphasize the need for further research in this area.
Date of Conference: 14-19 April 2024
Date Added to IEEE Xplore: 18 March 2024
ISBN Information:

ISSN Information:

Conference Location: Seoul, Korea, Republic of

Contact IEEE to Subscribe

References

References is not available for this document.