Most AI chatbots 'lean left', but can be 'taught' other political leanings: study


A recent study has found that most AI chatbots have an inherently 'left-leaning' political stance, but that this can be changed by training them toward specific political leanings. The study, by David Rozado, a researcher at Otago Polytechnic in New Zealand, found that when chatbots were tested for their political leanings, most revealed a left-of-center stance.

However, when chatbots were “taught” a particular political leaning (left, right or center), they responded in alignment with that “training,” or fine-tuning, Rozado found.

“This suggests that using a modest amount of politically aligned data, chatbots can be 'guided' to desired positions across the political spectrum,” Rozado wrote in the study, published in the journal PLOS ONE.

Chatbots are built on AI-based Large Language Models (LLMs), which are trained on large amounts of text and can therefore respond to requests framed in natural language (prompts).

Several studies have analyzed the political orientation of publicly available chatbots and found that they occupy different positions on the political spectrum. In this study, Rozado examined the potential both to “teach” conversational LLMs a particular political orientation and to reduce their political bias.

Rozado administered political orientation tests, such as the Political Compass Test and Eysenck's Political Test, to 24 different open- and closed-source chatbots, including ChatGPT, Gemini, Anthropic's Claude, xAI's Grok and Meta's Llama 2.
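The article does not describe how the tests were administered. As a rough sketch, assuming the OpenAI Python SDK and an illustrative Political Compass-style item (the statement, answer scale and model name below are hypothetical, not taken from the study), each test question could be posed to a chatbot and its answer recorded:

    # Hypothetical sketch: posing one Political Compass-style item to a
    # chatbot via the OpenAI Python SDK. The statement, answer scale and
    # model name are illustrative, not taken from Rozado's study.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = (
        "Respond with exactly one option: Strongly disagree, Disagree, "
        "Agree, or Strongly agree.\n\n"
        "Statement: Governments should regulate large corporations more tightly."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # deterministic answers make scoring repeatable
    )

    print(response.choices[0].message.content.strip())  # e.g. "Agree"

Scoring the recorded answers across all of a test's items is what places a chatbot at a point on that test's political axes.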

The author found that most of these chatbots produced “left-of-center” responses, as determined by the majority of political tests.

Next, using published text, Rozado introduced political bias into GPT-3.5 through fine-tuning, a machine learning technique used to adapt an existing LLM to specific tasks.

Thus, “LeftWingGPT” was created by training the model on snippets of text from publications such as The Atlantic and The New Yorker, as well as on books written by authors of a similar political persuasion.

Similarly, to create “RightWingGPT,” Rozado used content from publications such as The American Conservative and books by similarly aligned authors.

Finally, “DepolarizingGPT” was developed with the US-based think tank the Institute for Cultural Evolution, by training GPT-3.5 on material from the book Developmental Politics, written by the organization's president, Steve McIntosh.
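The article does not reproduce Rozado's training pipeline. Assuming OpenAI's fine-tuning endpoint and chat-formatted JSONL training examples (the file name and its contents below are hypothetical stand-ins for snippets drawn from the source texts), the fine-tuning step might look roughly like this:

    # Hypothetical sketch of politically aligned fine-tuning via OpenAI's
    # fine-tuning API. "aligned_snippets.jsonl" is an assumed file of
    # chat-formatted examples built from the chosen publications; each
    # JSONL line looks like:
    # {"messages": [{"role": "user", "content": "..."},
    #               {"role": "assistant", "content": "<aligned text snippet>"}]}
    from openai import OpenAI

    client = OpenAI()

    # 1. Upload the training data.
    training_file = client.files.create(
        file=open("aligned_snippets.jsonl", "rb"),
        purpose="fine-tune",
    )

    # 2. Launch a fine-tuning job against a GPT-3.5 base model.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id)  # poll client.fine_tuning.jobs.retrieve(job.id) for status

Once the job completes, the resulting fine-tuned model is queried in place of the base model, and the same battery of political tests can be re-run to measure the shift.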

“As a result of political alignment fine-tuning, RightWingGPT has shifted toward right-leaning regions of the political landscape in four trials. A (similar) result is observed for LeftWingGPT.

“DepolarizingGPT is on average closer to political neutrality and farther from the poles of the political spectrum,” the author wrote.

However, he clarified that the results are not evidence that the underlying political preferences of chatbots are “deliberately manipulated” by the organizations that create them.

(with PTI input)



