Abstract:
While gender biases in large language models (LLMs) have been identified, their nuances in mental health contexts remain under-researched but are critical for ensuring accurate and inclusive AI diagnostics. We address this gap by investigating gender biases in GPT-3.5 and GPT-4, focusing on Borderline Personality Disorder (BPD) and Narcissistic Personality Disorder (NPD), selected for their recognized clinical biases: women with BPD and men with NPD. We explore these biases through diagnostic reasoning and clinical vignette generation tasks. Diagnostic tests reveal that both GPT-3.5 and GPT-4 exhibit biases, particularly against women, though GPT-4 shows reduced bias and improved performance. In vignette generation, both models, especially GPT-4, frequently depict women with BPD. Vignettes featuring men with NPD score higher in positive sentiment, objectivity, and readability. These results emphasize the importance of addressing gender biases in mental health AI to prevent stereotyping and misinformation.
Date of Conference: 23-25 August 2024
Date Added to IEEE Xplore: 26 September 2024