Loading [MathJax]/extensions/MathMenu.js
Detecting the presence of social bias in GPT-3.5 using association tests | IEEE Conference Publication | IEEE Xplore

Detecting the presence of social bias in GPT-3.5 using association tests


Abstract:

Bias in AI is becoming a rising concern as more and more systems rely on generative artificial intelligence. The bias in the training data used to train generative AI is ...Show More

Abstract:

Bias in AI is becoming a rising concern as more and more systems rely on generative artificial intelligence. The bias in the training data used to train generative AI is reflected in its applications such as writing, drawing, advising and decision-making. Since the advent of GPT-3, a growing number of people have been relying on it. Content generated by GPT-3 has the potential to influence people on multiple levels. If this content is biased only towards a certain proportion of the population, it may have grave consequences including legal and ethical concerns. The aim of this study is to detect the associations in large language generative models. This study presents a test which can be used as an effective medium to test high-level bias in chat-based GPT models such as ChatGPT. This test measures word associations and is similar in essence to the Implicit Association Test (IAT) which is used to measure bias in humans. The test gives insights into the bias of the model with respect to the explicit associations found by querying the model. In this study, we use this test to detect gender-career and racial bias in the ChatGPT-3.5 model. We prove that the model is negatively biased towards women and people of the Arabic race.
Date of Conference: 06-07 October 2023
Date Added to IEEE Xplore: 23 January 2024
ISBN Information:
Conference Location: Mumbai, India

Contact IEEE to Subscribe

References

References is not available for this document.