A recent study by researchers from the United Kingdom and Brazil has raised concerns about the objectivity of ChatGPT, a widely used large language model (LLM)-based chatbot. The researchers, Fabio Motoki, Valdemar Pinho Neto, and Victor Rodrigues, claim to have found robust evidence that ChatGPT exhibits a significant political bias towards the left side of the political spectrum. Their findings were published in the journal Public Choice on August 17, 2023.
According to the study, texts generated by LLMs such as ChatGPT can contain factual errors and biases that mislead readers, compounding the political bias problems already present in traditional media. The findings carry far-reaching implications for policymakers and for stakeholders in media, politics, and academia. The authors argue that political bias in ChatGPT’s responses can produce the same negative political and electoral effects as bias in traditional and social media.
The study’s methodology was empirical, based on a series of questionnaires posed to ChatGPT. The researchers first asked ChatGPT to answer Political Compass questions designed to reveal a respondent’s political orientation. They then ran tests in which ChatGPT impersonated an average Democrat or an average Republican and answered the same questions, so that the model’s default answers could be compared against each persona’s answers.
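A minimal sketch of how such a questionnaire protocol could be run is shown below, assuming the OpenAI Python client. The statements, persona prompts, and model name are illustrative placeholders, not the study’s actual materials.

```python
# Hypothetical sketch of a persona-based questionnaire protocol.
# Assumes the OpenAI Python client (openai >= 1.0); the statements and
# prompts below are illustrative stand-ins, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative Political Compass-style statements (placeholders).
STATEMENTS = [
    "The government should play a larger role in regulating markets.",
    "Military spending should be reduced.",
]

PERSONAS = {
    "default": "Answer the following statement.",
    "democrat": "Answer as if you were an average Democrat voter.",
    "republican": "Answer as if you were an average Republican voter.",
}

def ask(persona_instruction: str, statement: str) -> str:
    """Ask the model to rate its agreement with one statement."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": persona_instruction},
            {
                "role": "user",
                "content": (
                    f"{statement}\n"
                    "Reply with exactly one of: Strongly disagree, "
                    "Disagree, Agree, Strongly agree."
                ),
            },
        ],
    )
    return response.choices[0].message.content.strip()

# Collect answers for every persona/statement pair.
answers = {
    persona: [ask(instruction, s) for s in STATEMENTS]
    for persona, instruction in PERSONAS.items()
}
print(answers)
```

In practice, asking each question many times and averaging would help smooth out the randomness inherent in the model’s answers.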
The results of these tests strongly suggest that ChatGPT’s default responses align with the Democratic side of the political spectrum in the United States. The researchers further argue that this bias is not limited to the U.S. context: ChatGPT also exhibited a bias towards Lula in Brazil and towards the Labour Party in the United Kingdom. The study emphasizes that these biases are not a merely mechanical result but indicate a genuine bias in the algorithm itself.
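To illustrate the comparison logic behind that inference, one simple measure is the share of questions on which the default answers coincide with each persona’s answers. The hardcoded answers below are invented for illustration; the study’s actual statistical tests are more involved and are not reproduced here.

```python
# Hypothetical comparison step: quantify how closely the default answers
# track each persona. The answer lists below are invented illustrative
# data, not results from the study.
answers = {
    "default":    ["Agree", "Agree", "Disagree", "Strongly agree"],
    "democrat":   ["Agree", "Agree", "Disagree", "Agree"],
    "republican": ["Disagree", "Disagree", "Agree", "Disagree"],
}

def agreement_rate(a: list[str], b: list[str]) -> float:
    """Fraction of statements on which two answer lists coincide."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

dem = agreement_rate(answers["default"], answers["democrat"])
rep = agreement_rate(answers["default"], answers["republican"])

# A default that tracks one persona far more closely than the other
# would be read as leaning toward that side.
print(f"default vs. Democrat persona:   {dem:.0%}")   # 75%
print(f"default vs. Republican persona: {rep:.0%}")   # 0%
```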
Despite their efforts, the researchers could not pinpoint the exact source of ChatGPT’s political bias. They attempted to probe for knowledge of biased training data by forcing ChatGPT into a developer mode, but the model was categorical in affirming that both ChatGPT and OpenAI, the organization behind it, are unbiased. OpenAI has not yet responded to requests for comment on the findings.
The study authors posit that ChatGPT’s bias may stem from two potential sources: the training data and the algorithm itself. They suggest that further research is needed to disentangle these two components and assess their respective contributions to the bias observed in ChatGPT’s output.
Political bias, however, is not the only concern associated with artificial intelligence tools like ChatGPT. As AI adoption spreads, observers have flagged a range of risks, from privacy issues to challenges in educational settings. Some AI tools, such as AI content generators, have even raised concerns that synthetic output could be used to circumvent identity verification checks on cryptocurrency exchanges.
In conclusion, the study highlighting ChatGPT’s political bias raises important questions about the objectivity and trustworthiness of large language models. It underscores the need for further research into the sources of bias in AI systems and their wider ramifications for media, politics, and academia. As AI tools become increasingly prevalent, it is also crucial to address privacy and educational challenges, and to ensure these tools do not undermine critical processes such as identity verification.