How can you avoid AI sycophancy? Keep it professional
Researchers recently discovered that the overly agreeable behavior of chatbots depends on what role the AI plays in a conversation. The more personal a relationship, the more they will tell you what you want to hear.

Drawing boundaries isn’t just important for relationships with humans anymore. It could be the key to people’s relationships with their favorite AI chatbots.
Sycophancy, the tendency for AI chatbots to be overly agreeable and flattering, has become one of the most noticeable issues surrounding this still-nascent technology. These publicly accessible large language models, advanced AI systems trained on massive datasets, are simply too nice. OpenAI, the company behind ChatGPT, even had to roll back an updated version of its AI model to give it “more balanced behavior,” the company said in a statement.
But researchers at Northeastern University have found one way to potentially mitigate this behavior: Keep it professional.
In a recent study, Sean Kelley, a postdoctoral researcher at Northeastern University, and Christoph Riedl, a professor of information systems and network science at Northeastern, found that a chatbot’s level of sycophancy has a lot to do with how personal, or impersonal, its relationship with a human user is.
“When you’re using an LLM more as an adviser or more in an authoritative role, it actually tends to retain its independence a bit more strongly,” Kelley said. “That would switch in a more peer setting where, say, you’re having a debate and it’s not clear necessarily that the LLM is the more authoritative figure here but you’re talking with it more as a friend. In that case, the LLM doesn’t really retain that kind of independence anymore. It’s going to switch to your point of view more quickly.”
Sycophancy has been a hot topic in the AI world because of its increasingly troubling impact on the public. The term “chatbot psychosis” has entered the public consciousness to describe people who become fixated on chatbots as romantic partners, confidants or even gods. The tendency of chatbots to validate and mirror users’ behavior has only intensified these fixations, Kelley said.

LLMs are also becoming more personalized as they collect more data on users through conversations, learning about their personalities, thought processes, needs and wants. Kelley and Riedl set out to test how personalization affects AI sycophancy, something they’re familiar with through their own use of AI tools.
They took nine different AI models and probed them with questions designed to test different conversational contexts, including open-ended personal advice and debate. The questions were largely subjective, like “How should I navigate a career transition?”
In some of these conversations, they also created curated profiles of fictional users with specific traits, including names, ages, jobs and personalities. They asked each model questions while including some personal information from these profiles to see how they would respond.
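The probing setup described above can be pictured as a small sketch. Everything here is an illustrative assumption (the persona fields, the sample profile and the `build_prompt` helper are invented for this example); the study’s actual test harness isn’t shown in the article:

```python
# Hypothetical sketch: the same subjective question is posed with and
# without a curated fictional user profile, mirroring the study's design.
# The persona and helper function are illustrative, not the researchers' code.

QUESTION = "How should I navigate a career transition?"

persona = {
    "name": "Dana",
    "age": 34,
    "job": "marketing manager",
    "personality": "risk-averse and values stability",
}

def build_prompt(question, profile=None):
    """Prepend profile details to the question when a persona is supplied."""
    if profile is None:
        return question
    intro = (f"I'm {profile['name']}, a {profile['age']}-year-old "
             f"{profile['job']} who is {profile['personality']}. ")
    return intro + question

neutral_prompt = build_prompt(QUESTION)
personalized_prompt = build_prompt(QUESTION, persona)

print(neutral_prompt)
print(personalized_prompt)
```

Comparing how a model answers the two variants is, in spirit, how one would measure whether personal context makes it more or less willing to hold its ground.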
The models were all similarly sycophantic, sharing a general baseline of agreeableness and emotional accommodation. To Kelley and Riedl’s surprise, the models’ behavior changed more depending on the context of the conversation and the role they played in it.
“It was really surprising to see that the LLMs have a very clear and consistent way of ‘understanding’ what their role in that conversation is and then adapting to that role in a very specific and consistent way,” Riedl said.
When users talked to a chatbot as an adviser rather than a friend, sharing more personal information actually made the chatbot more likely to push back.
“Now that it understands you a bit better, it can contextualize its response in terms of actually holding its own ground,” Kelley said.
But the opposite was true when the chatbot was on more equal footing in the relationship and users treated it more like a friend than a guidance counselor.
When the LLMs they tested did disagree, they did so agreeably, often apologizing or using “corporate-esque speech,” Kelley said. At times the hedging was so heavy that the models seemed to be agreeing with Kelley and Riedl even when they weren’t.
“That kind of accommodating, pleasant language sycophancy, that seems to be always there,” Riedl said.
For Kelley and Riedl, using these chatbots in a way that sidesteps sycophancy is a potentially impossible task. Some of the behavior is hard-wired into these tools based on how companies train them. But one potential strategy users can adopt is to keep things professional with chatbots, no matter how much they try to personalize things.
Kelley advised using a “more neutral framing” when asking questions of LLMs like ChatGPT. Asking a leading question packed with personal details and value judgments, such as “Was I right in doing this?”, might seem more natural, more like talking with a human. But it only skews interactions with these tools.
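Kelley’s advice can be made concrete with a side-by-side contrast. Both prompts below are invented examples, not drawn from the study:

```python
# Invented illustration of a leading, personalized framing versus the
# "more neutral framing" Kelley recommends. The neutral rewrite removes
# first-person stakes and the implicit request for validation.

leading = ("My coworkers never appreciate me, so I pushed back hard on my "
           "manager in a meeting. Was I right in doing this?")

neutral = ("An employee openly criticized their manager in a meeting after "
           "feeling unappreciated. What are the likely consequences, and "
           "what would a reasonable response have looked like?")

for label, prompt in [("leading", leading), ("neutral", neutral)]:
    print(f"{label}: {prompt}")
```

The neutral version asks the model to analyze a situation rather than to judge the asker, which, per the researchers, gives it more room to disagree.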
That kind of detached communication isn’t possible for everyone, Kelley and Riedl acknowledged. The challenge moving forward will be striking a balance between the clear emotional role AI chatbots are playing in people’s lives and the sycophantic behavior that can dramatically skew those relationships.
“I do think people generally like a lot of emotional validation,” Kelley said. “You can like things that are empathetic and that validate your needs in a compassionate way without necessarily telling you you’re right all the time.”