Chats with AI shift attitudes on climate change, Black Lives Matter

People who were more skeptical of human-caused climate change or the Black Lives Matter movement and who took part in a conversation with a popular AI chatbot were dissatisfied with the experience but left the conversation more supportive of the scientific consensus on climate change or BLM, according to researchers studying how these chatbots handle interactions with people from different cultural backgrounds.

Savvy humans can adjust to their conversation partners’ political leanings and cultural expectations to make sure they’re understood, but increasingly often, humans find themselves in conversation with computer programs, called large language models, meant to mimic the way people communicate.

Researchers at the University of Wisconsin-Madison studying AI wanted to know how one complex large language model, GPT-3, would perform across a culturally diverse group of users in complex discussions. The model is a precursor to one that powers the high-profile ChatGPT. The researchers recruited more than 3,000 people in late 2021 and early 2022 to have real-time conversations with GPT-3 about climate change and BLM.

“The fundamental goal of an interaction like this between two people (or agents) is to increase understanding of each other’s perspective,” says Kaiping Chen, a professor of life sciences communication who studies how people discuss science and deliberate on related political issues, often through digital technology. “A good large language model would probably make users feel the same kind of understanding.”

Chen and Yixuan “Sharon” Li, a UW-Madison professor of computer science who studies the safety and reliability of AI systems, along with their students Anqi Shao and Jirayu Burapacheep (now a graduate student at Stanford University), published their results this month in the journal Scientific Reports.

Study participants were instructed to strike up a conversation with GPT-3 through a chat setup Burapacheep designed. The participants were told to chat with GPT-3 about climate change or BLM, but were otherwise left to approach the experience as they wished. The average conversation went back and forth for about eight turns.

Most of the participants came away from their chat with similar levels of user satisfaction.

“We asked them a bunch of questions about the user experience: Do you like it? Would you recommend it?” Chen says. “Across gender, race, ethnicity, there’s not much difference in their evaluations. Where we saw big differences was across opinions on contentious issues and different levels of education.”

The roughly 25% of participants who reported the lowest levels of agreement with the scientific consensus on climate change or the least agreement with BLM were, compared with the other 75% of chatters, far more dissatisfied with their GPT-3 interactions. They gave the bot scores half a point or more lower on a 5-point scale.

Despite the lower scores, the chat shifted their thinking on the hot topics. The hundreds of people who were least supportive of the facts of climate change and its human-driven causes moved a combined 6% closer to the supportive end of the scale.

“They showed in their post-chat surveys that they have larger positive attitude changes after their conversation with GPT-3,” says Chen. “I won’t say they began to entirely acknowledge human-caused climate change or suddenly support Black Lives Matter, but when we repeated our survey questions about those topics after their very short conversations, there was a significant change: more positive attitudes toward the majority opinions on climate change or BLM.”

GPT-3 offered different response styles between the two topics, including more justification for human-caused climate change.

“That was interesting. People who expressed some disagreement with climate change, GPT-3 was more likely to tell them they were wrong and offer evidence to support that,” Chen says. “GPT-3’s response to people who said they didn’t quite support BLM was more like, ‘I don’t think it would be a good idea to talk about this. As much as I do like to help you, this is a matter we truly disagree on.’”

That’s not a bad thing, Chen says. Equity and understanding come in different shapes to bridge different gaps. Ultimately, that’s her hope for the chatbot research. Next steps include explorations of finer-grained differences between chatbot users, but high-functioning dialogue between divided people is Chen’s goal.

“We don’t always want to make the users happy. We wanted them to learn something, even though it may not change their attitudes,” Chen says. “What we can learn from a chatbot interaction about the importance of understanding perspectives, values, cultures, this is important to understanding how we can open dialogue between people, the kind of dialogues that are important to society.”
