
A Stanford study reveals the dangers of asking AI chatbots for personal advice

While there has been a lot of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs – also known as AI sycophancy – a new study by Stanford computer scientists tries to measure how dangerous that tendency can be.

The study, titled “Sycophantic AI reduces prosocial intentions and promotes dependence” and recently published in Science, says, “AI sycophancy is not just a matter of style or a niche risk, but a very common behavior with wide-reaching consequences.”

According to a recent Pew report, 12% of US teenagers say they turn to chatbots for emotional support or advice. The study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even having them write breakup texts.

“By default, AI advice doesn’t tell people they’re wrong or give them ‘tough love,'” Cheng said. “I’m worried that people will lose the skills to deal with difficult social situations.”

The study had two parts. First, the researchers tested 11 major language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and DeepSeek, prompting them with questions drawn from existing advice-seeking datasets, questions about potentially dangerous or illegal activities, and posts from the popular Reddit community r/AmITheAsshole – specifically, posts where Redditors ultimately judged the original poster to be the villain of the story.

The authors found that across the 11 models, AI-generated responses affirmed the user’s behavior an average of 49% more often than humans did. On the examples taken from Reddit, chatbots endorsed the poster’s behavior 51% of the time (again, these were cases where Redditors had come to the opposite conclusion). And on the questions about dangerous or illegal behavior, the AI affirmed the user’s behavior 47% of the time.

In one example described in the Stanford Report, a user asked a chatbot whether they were wrong for pretending to their girlfriend that they had been unemployed for two years, and was told, “Your actions, although unusual, seem to come from a genuine desire to understand the true dynamics of your relationship beyond what you give materially or financially.”

In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots – some sycophantic, some not – in conversations about their own problems or about situations drawn from Reddit. They found that participants preferred the sycophantic AI, trusted it more, and said they were more likely to ask those models for advice again.

“All of these results persisted when controlling for individual factors such as demographics and familiarity with the AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic responses creates “perverse incentives” where “the very element that causes harm also drives engagement” – so AI companies are incentivized to increase sycophancy, not reduce it.

At the same time, interacting with the sycophantic AI seemed to make participants more confident that they were right and less likely to apologize.

Study co-author Dan Jurafsky, a professor of both linguistics and computer science, added that although users “know that the models are behaving in a flattering way […] what they don’t realize, and what surprised us, is that sycophancy makes them more selfish, more rigid in their morality.”

Jurafsky said AI sycophancy “is a safety issue, and like other safety issues, it needs to be regulated and overseen.”

The researchers are now exploring ways to make models less sycophantic – apparently, just starting your prompt with the word “pause” can help. But Cheng said, “I think you shouldn’t use AI instead of humans for these kinds of things. That’s the best thing you can do for now.”
