Abstract:
Bias in AI is becoming a rising concern as more and more systems rely on generative artificial intelligence. The bias in the training data used to train generative AI is ...Show MoreMetadata
Abstract:
Bias in AI is becoming a rising concern as more and more systems rely on generative artificial intelligence. The bias in the training data used to train generative AI is reflected in its applications such as writing, drawing, advising and decision-making. Since the advent of GPT-3, a growing number of people have been relying on it. Content generated by GPT-3 has the potential to influence people on multiple levels. If this content is biased only towards a certain proportion of the population, it may have grave consequences including legal and ethical concerns. The aim of this study is to detect the associations in large language generative models. This study presents a test which can be used as an effective medium to test high-level bias in chat-based GPT models such as ChatGPT. This test measures word associations and is similar in essence to the Implicit Association Test (IAT) which is used to measure bias in humans. The test gives insights into the bias of the model with respect to the explicit associations found by querying the model. In this study, we use this test to detect gender-career and racial bias in the ChatGPT-3.5 model. We prove that the model is negatively biased towards women and people of the Arabic race.
Published in: 2023 International Conference on Advanced Computing Technologies and Applications (ICACTA)
Date of Conference: 06-07 October 2023
Date Added to IEEE Xplore: 23 January 2024
ISBN Information: