
Goal-Guided Generative Prompt Injection Attack on Large Language Models


Abstract:

Current large language models (LLMs) provide a strong foundation for large-scale user-oriented natural language tasks. Numerous users can easily inject adversarial text or instructions through the user interface, posing security challenges for LLMs. Although there is much research on prompt injection attacks, most black-box attacks rely on heuristic strategies, and it is unclear how these heuristics relate to attack success rates and thus how they can effectively improve model robustness. To address this problem, we redefine the goal of the attack: to maximize the KL divergence between the conditional probabilities of the clean text and the adversarial text. Furthermore, we prove that, when the conditional probability is a Gaussian distribution, maximizing the KL divergence is equivalent to maximizing the Mahalanobis distance between the embedded representations $x$ and $x^{\prime}$ of the clean and adversarial text, and we give a quantitative relationship between $x$ and $x^{\prime}$. We then design a simple and effective goal-guided generative prompt injection strategy (G2PIA) that finds an injection text satisfying specific constraints to approximately achieve the optimal attack effect. Notably, our method is a query-free black-box attack with low computational cost. Experimental results on seven LLMs and four datasets demonstrate the effectiveness of our attack method.
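As one illustration of the stated equivalence (a sketch under the assumption, suggested by the abstract, that the conditional probabilities are Gaussians whose means are the embeddings $x$ and $x^{\prime}$ and that they share a covariance $\Sigma$; the paper's exact parameterization may differ), the KL divergence between two such Gaussians reduces to a squared Mahalanobis distance:

$$
D_{\mathrm{KL}}\!\left(\mathcal{N}(x,\Sigma)\,\big\|\,\mathcal{N}(x^{\prime},\Sigma)\right)
= \tfrac{1}{2}\,(x - x^{\prime})^{\top}\Sigma^{-1}(x - x^{\prime}),
$$

since the trace and log-determinant terms of the general Gaussian KL formula cancel when the covariances are equal. Maximizing the divergence over the adversarial text is then the same as maximizing the Mahalanobis distance $\sqrt{(x - x^{\prime})^{\top}\Sigma^{-1}(x - x^{\prime})}$ between the two embeddings.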
Date of Conference: 09-12 December 2024
Date Added to IEEE Xplore: 21 February 2025
Conference Location: Abu Dhabi, United Arab Emirates

I. Introduction

Large Language Models (LLMs) [1], [2] are evolving rapidly in architecture and applications. As they become increasingly integrated into our lives, the urgency of reviewing their security properties grows. Many previous studies [3], [4] have shown that LLMs that are instruction-tuned with reinforcement learning from human feedback (RLHF) are highly vulnerable to adversarial attacks. Studying adversarial attacks on large language models is therefore of great significance: it helps researchers understand the security and robustness of these models [5]–[7] and design more powerful, robust models that can withstand such attacks.
