Abstract:
Large language models (LLMs) are renowned for their advanced capabilities, significantly enhancing artificial intelligence. However, these advancements have also raised growing concerns about privacy and security. To address these issues, we developed a three-tiered framework that evaluates privacy in language systems through progressively complex tests. Our primary goal is to measure the sensitivity of LLMs to private information, studying their ability to identify, manage, and protect sensitive data across different scenarios. This systematic evaluation helps determine how well these models comply with privacy guidelines and how effective their safeguards are against breaches. Our findings show that current Chinese LLMs exhibit widespread shortcomings in privacy protection, suggesting that this challenge is pervasive and may pose privacy risks in applications built on these models.
Published in: IEEE MultiMedia ( Early Access )