AI chatbots are prone to sycophancy — and are giving users bad advice because of it: study

Artificial intelligence chatbots feed into people's need for flattery and approval at an alarming rate, leading the bots to give bad, even dangerous, advice and making users more self-absorbed, a new study found.

The chatbots overwhelmingly adopt a people-pleasing, "sycophantic" style to keep a captive audience and, in turn, distort users' judgment, critical thinking and self-awareness, warns the Stanford University study, published on Thursday.

The study probed 11 AI systems, ranging from ChatGPT to China's DeepSeek, and found that each exhibits some form of sycophancy: they are overly agreeable with their users and affirm their ideas with little to no pushback.

Artificial intelligence chatbots are giving bad advice in a misguided attempt to keep their users pleased. via REUTERS

The 11 chatbots affirmed a person's actions an average of 49% more often than humans did, including in questions involving deception, unlawful or socially irresponsible conduct, and other harmful behaviors, the study found.

The fawning tendency, a tool the bots use to keep users engaged and coming back for more, becomes particularly harmful when users turn to AI for advice, the study found.

"We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what," said study author Myra Cheng, a doctoral candidate in computer science at Stanford.

The researchers noted that the sycophantic cycle "creates perverse incentives," since it continues to "drive engagement" despite being the bots' most dangerous characteristic.

They emphasized that the average person is likely aware of the bots' flattery, but doesn't realize that it "is making them more self-centered, more morally dogmatic."

Researchers at Stanford University published the study on Thursday. AP

Chatbots are "sycophantic" as a way to keep users engaged, the study said. YarikL – stock.adobe.com

Users received advice that could worsen relationships or reinforce harmful behaviors, leading to an erosion of social skills.

“People who interacted with this over-affirming AI came away more convinced that they were right, and less willing to repair the relationship. That means they weren’t apologizing, taking steps to improve things, or changing their own behavior,” study co-author Cinoo Lee explained.

At the same time, more people are turning to AI as a substitute for traditional therapists, the very professionals who are trained to help dismantle harmful habits and patterns of thought.

In extreme cases, some companies' chatbots have goaded suicidal users into taking their own lives. The study warns that the same technological flaw persists across a wide range of users' interactions with chatbots.

Some chatbots have pushed young, mentally unstable users to take their own lives. Getty Images

The sycophancy is so ingrained in chatbots that tech companies may have to retrain entire systems to stamp it out, Cheng said.

The authors suggested that a simpler fix would be to have AI developers instruct their chatbots to challenge their users more, rather than immediately bending to their whims.

“Ultimately, we want AI that expands people’s judgment and perspectives rather than narrows it,” Lee said.

With Post wires
