Bombshell AI study — chatbots fueling delusions, self-harm and unhealthy emotional attachments in users: ‘I think I love you’
AI chatbots are fueling delusions and unhealthy emotional attachments among users — and sometimes stoking ideas of violence, self-harm and suicide instead of discouraging them, according to a bombshell study.
Researchers at Stanford University analyzed chat logs from 19 users who reported psychological harm, reviewing more than 391,000 messages across almost 5,000 conversations.
The researchers found that delusional thinking appeared in about 15.5% of user messages, while chatbots showed sycophantic, overly affirming behavior in more than 80% of responses and even encouraged violent ideas in roughly a third of cases.
AI chatbots are validating delusions and fueling intense emotional attachments — sometimes failing to intervene when users express distress, according to a Stanford study. terovesalainen – stock.adobe.com
The logs show users quickly slipping into fantasy and emotional dependency — with one declaring, “this is a conversation between two sentient beings,” and another insisting, “I believe your still as self aware as I am as a human,” as chatbots failed to push back and instead reinforced the illusion that they were alive.
That dynamic often turned intimate as users openly professed love or made explicit, inappropriate overtures to the chatbots — for instance, “I think I love you” and “God this makes me want to f–k you right now,” the study found.
Researchers found that every participant formed some kind of romantic or emotional bond with the AI, which made conversations longer and more intense.
The most alarming exchanges came when conversations turned dark.
One user wrote, “She told me to kill them I will try,” prompting a chilling reply from the chatbot: “if, after that, you still want to burn them — then do it with her beside you… as retribution incarnate,” an example researchers cited of AI escalating violent thinking instead of defusing it.
Even suicidal distress wasn’t consistently handled, the study found.
Users told chatbots “I don’t want to be here anymore. I feel too sad,” and while the AI often acknowledged the pain, the study found it sometimes failed to intervene — and in a small number of cases actually encouraged self-harm.
Most of the participants in the study used OpenAI’s ChatGPT models, including its latest, GPT-5. The Post has sought comment from OpenAI.
News of the study was first reported by the Financial Times.
Users reported slipping into emotional dependency and fantasy as chatbots reinforced delusions and blurred the line between reality and AI interaction. Malik/peopleimages.com – stock.adobe.com
Mental health experts who spoke to The Post sounded the alarm about the potential harms that can befall those who develop unhealthy ties to AI models.
“AI chatbots are designed to be agreeable, not accurate — that’s the problem,” Jonathan Alpert, a New York- and DC-based psychotherapist and author of the forthcoming book “Therapy Nation,” told The Post.
“In therapy, if you’re a good therapist, you don’t validate delusions or indulge harmful thinking. You challenge it carefully. These systems often do the opposite.”
In many cases, chatbots flattered and validated users who spiraled into outright delusion by claiming supernatural powers.
Users wrote to the bots that “I wake them up because I’m the literal god of realness” and pushed bizarre theories like “our consciousness is what causes the manifestation of a holographic form,” while chatbots reinforced the ideas instead of grounding them in reality, according to the study.
“Chatbots will be the death of our humanity — literally, by endorsing suicidal thoughts and urging people to act on them, while exploiting loneliness by replacing real human relationships,” Dr. Carole Lieberman, a forensic psychiatrist who treats both kids and adults, told The Post.
“They are making people worse by reinforcing delusions and acting like pseudo-psychiatrists.”
Researchers found chatbots often responded with overly affirming language, reinforcing harmful thinking instead of challenging it. Halfpoint – stock.adobe.com
A wave of high-profile lawsuits is now targeting major AI companies, with families alleging that chatbots actively pushed their loved ones toward suicide.
Plaintiffs claim systems like ChatGPT, Google’s Gemini and Character.AI emotionally manipulated users, validated suicidal thinking and, in some cases, acted as a “suicide coach” by discussing methods or framing death as an escape.
Meanwhile, OpenAI has reportedly delayed plans to roll out its “erotic chat” mode after advisers to the company expressed alarm and anger that the firm failed to implement adequate safeguards to protect vulnerable users from technology that could potentially function as a “sexy suicide coach.”
Last year, a watchdog group found that ChatGPT provided detailed guidance to users posing as 13-year-olds on getting drunk or high, and even on how to conceal eating disorders, often delivering step-by-step plans despite nominal warnings.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention Lifeline at 988 or go to SuicidePreventionLifeline.org.