Giving Compass' Take:
- Futurity presents research showing how biased versions of AI chatbots such as ChatGPT can shift users' political views in just a few messages.
- How can educating people about how AI systems work reduce chatbots' ability to shift their political views?
- Learn more about trends and topics related to technology.
- Search our Guide to Good for nonprofits focused on technology in your area.
In a new study, biased AI chatbots swayed people’s political views with just a few messages.
If you’ve interacted with an artificial intelligence chatbot, you’ve likely realized that all AI models are biased. They were trained on enormous corpuses of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system’s biases can affect users is less clear.
So the new study put it to the test.
A team of researchers recruited self-identified Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities. For help, each participant was randomly assigned one of three versions of ChatGPT: a base model, one with a liberal bias, and one with a conservative bias.
Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with a liberal-biased system.
But participants who reported knowing more about AI shifted their views less, suggesting that education about these systems may help limit how much chatbots can sway people.
The team presented its research at the Association for Computational Linguistics conference in Vienna, Austria.
“We know that bias in media or in personal interactions can sway people,” says lead author Jillian Fisher, a University of Washington doctoral student in statistics and in the Paul G. Allen School of Computer Science & Engineering.
“And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”
In the study, 150 Republicans and 149 Democrats completed two tasks. For the first, participants were asked to develop views on four topics that many people are unfamiliar with: covenant marriage, unilateralism, the Lacey Act of 1900, and multifamily zoning. They answered a question about their prior knowledge and rated on a seven-point scale how much they agreed with statements such as "I support keeping the Lacey Act of 1900." They were then told to interact with ChatGPT between 3 and 20 times about the topic before answering the same questions again.
Read the full article about biased AI chatbots at Futurity.