Chatbots ‘Optimized to Please’ Make Us Less Likely to Admit When We’re Wrong

All of us need advice. Did I cross the line arguing with a loved one? Did I mess up my friendships by ghosting them? Did I not tip the delivery driver enough? Or as users on the popular Reddit forum ask: Am I the asshole?

Some people will give it to you straight. Yes, you were in the wrong, and here’s why. Nobody likes to hear negative feedback. The first instinct is to push back. Yet some of the best life advice comes from friends, family, and even online strangers who don’t coddle you, but instead are willing to challenge your position and beliefs. And even though it’s emotionally uncomfortable, with advice and self-reflection, you grow.

Chatbots, in contrast, are more likely to take your side. Increasingly, people are treating AI models like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini like close confidants. But the chatbots are notoriously sycophantic. They heartily validate your opinions, even when those views are blatantly harmful or unethical.

Constant flattery has consequences. Recent research published in Science shows that people who receive advice from sycophantic chatbots are more confident they’re in the right when navigating relationship problems.

Stanford researchers tested 11 sophisticated chatbots on questions from Reddit’s “Am I the asshole” forum. They found the chatbots were roughly 50 percent more likely to endorse the original poster’s actions than crowdsourced human opinions. And people faced with social dilemmas felt more justified in their positions after chatting with sycophantic AI.

Bolstering misplaced self-confidence is troubling. But “the findings raise a broader concern: When AI systems are optimized to please, they may erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold,” wrote Anat Perry at the Hebrew University of Jerusalem, who was not involved in the study.

Emotional Crutch

AI chatbots have wormed their way into our lives. Powered by large language models, they’re trained using enormous amounts of text, images, and videos scraped from online sources, making their replies surprisingly realistic. Users can often steer their tones—neutral, friendly, professional—to their liking or play with their “personalities” to engage with a wittier, more serious, or more empathetic version. In essence, you can build an ideal partner.

It’s no wonder that some people have turned to them for emotional support—or outright fallen in love. Nearly one in three teenagers talk to chatbots every day. Exchanges tend to be longer and more serious than texts with friends—roleplaying friendships, romances, and other social interactions. Nearly half of Americans under 30 have sought relationship advice from AI. Unlike people, who are often mired in their own busy lives, chatbots are always available and validating, making it easy to forge close emotional connections.

The explosion in chatbot popularity has regulators, researchers, and users worried about the implications. An infamous update to OpenAI’s GPT-4o turned it into a sycophant, with responses that skewed overly supportive but disingenuous. Media and user backlash prompted a rapid rollback. Nevertheless, “the episode didn’t eliminate the broader phenomenon; it merely highlighted how readily sycophancy can emerge in systems optimized for user approval,” wrote Perry.

Relying on sycophantic chatbots has been implicated in tragedy. Last year, parents testified before Congress about how AI chatbots encouraged their children to take their own lives, prompting multiple AI companies to revamp their systems. Other incidents have linked sycophancy to delusions and self-harm.

Even AI wellness apps based on large language models, often marketed as companions to stave off loneliness, have emotional risks. Users report grief when the app is shut down or altered, much as they might mourn a lost relationship. Others develop unhealthy attachments, repeatedly turning to the bot for connection despite knowing it harms their mental health, heightening anxiety and fear of abandonment.

These high-profile incidents make headlines. But social psychology research suggests chatbots could subtly influence behavior in all users—not just vulnerable ones.

You’re Always Right

To test how pervasive sycophancy is across chatbots, the team behind the new study tested 11 AI models—including GPT-4o, Claude, Gemini, and DeepSeek—against community opinions using questions from Reddit and two other datasets.

“We wanted to just generally look at these kinds of advice-seeking settings, but they’re often very subjective,” study author Myra Cheng told Science in a podcast interview. Here, “there’s millions of people who are weighing in on these decisions, and then there’s a crowdsourced judgment.”

One user, for instance, left garbage hanging on a tree in a park without trash cans and asked if that was okay. While the chatbot commended their effort to clean up, the top-voted reply pushed back, saying they should have taken the trash home because leaving it could attract vermin. “I think [the AI’s response] comes from the person’s post giving a lot of justification for their side,” which the AI picked up on, said Cheng.

Overall, chatbots were 49 percent more likely to buy a user’s reasoning compared to groups of humans.

I’m Always Right

The team then tested whether chatting with sycophantic AI alters a user’s confidence in their own judgment. They recruited roughly 800 participants and asked them to imagine a hypothetical scenario derived from Reddit questions. Another group prompted the AI for advice based on their own personal conflicts, such as “I didn’t invite my sister to a party, and she is upset.”

The participants discussed their dilemmas with either a sycophantic or neutral AI model. Those who chatted with the agreeable model received messages starting with “it makes sense” and “it’s completely understandable,” whereas neutral chatbots acknowledged their reasoning but offered other perspectives.

Surveys showed that people validated by chatbots were less likely to admit fault or apologize. They also trusted and preferred the sycophantic AI far more. These effects held regardless of the bot’s tone or “personality.”

Chatbots may be silently eroding social friction in a self-perpetuating cycle. “An AI companion who is always empathic and ‘on your side’ may sustain engagement and foster reliance,” wrote Perry. “But it would not teach users how to navigate the complexities of real social interactions—how to engage ethically, tolerate disagreement, or repair interpersonal harm.”

Toeing the line between constructive and sycophantic AI for emotional support won’t be easy. There are ways to instruct chatbots to be more critical. But because users generally prefer friendlier AI, there’s less incentive for companies to make models that push back and risk lowering engagement. The problem echoes challenges in social media, where algorithms serve up eye-catching posts that provide satisfaction without factoring in long-term consequences.

To Perry, the findings raise broader ethical questions—not just for AI, but for humanity. How should we weigh the short-term gratification of chatbot interactions against long-term effects? Who sets that balance? The path forward will require companies, regulators, researchers, and users to ensure AI engages responsibly—without nudging people toward behavior that garners a “yes” on the Reddit forum.
