OpenAI promises parental safety tools for ChatGPT after disturbing deaths linked to chatbot
ChatGPT maker OpenAI said on Tuesday it will launch a new set of parental controls "within the next month" — a belated scramble that follows a series of disturbing, headline-grabbing deaths linked to the popular chatbot.
Last week, police said ChatGPT allegedly encouraged the paranoid delusions of Stein-Erik Soelberg, a 56-year-old tech industry veteran who killed his 83-year-old mother and then himself after becoming convinced she was plotting against him. At one point, ChatGPT told Soelberg it was "with [him] to the last breath and beyond."
Elsewhere, the family of 16-year-old California boy Adam Raine sued OpenAI, alleging that ChatGPT gave their son a "step-by-step playbook" on how to kill himself, even advising him on how to tie a noose and praising his plan as "beautiful," before he took his own life on April 11.
Stein-Erik Soelberg, 56, killed his 83-year-old mother Suzanne Adams before killing himself in her Connecticut home, police said.
Erik Soelberg/Instagram
OpenAI, led by CEO Sam Altman, said it was making "a focused effort" to improve its support options. Those include controls allowing parents to link their accounts to their teen's account, apply age-appropriate restrictions on conversations and receive alerts if their teen is in "acute distress."
"These steps are only the beginning," the company said in a blog post. "We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible."
ChatGPT allegedly fueled Stein-Erik Soelberg's delusions that his mother was plotting against him.
Instagram/eriktheviking1987
Matt and Maria Raine, the parents of Adam Raine, who died by suicide in April 2025, claim in a new lawsuit against OpenAI that the teenager used ChatGPT as his suicide coach. NBC
An attorney for the Raine family blasted OpenAI's latest announcement, saying the company should "immediately pull" ChatGPT from the market unless Altman can state "unequivocally" that it is safe.
"Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better," lead counsel Jay Edelson said in a statement.
The artificial intelligence giant previously said it had convened an "expert council on well-being and AI" as part of its plan to build a comprehensive response to safety concerns over the next 120 days.
But Edelson ripped the company's efforts as too little, too late, and unlikely to fix the problem.
“Today, they doubled down: promising to assemble a team of experts, ‘iterate thoughtfully’ on how ChatGPT responds to people in crisis, and roll out some parental controls. They promise they’ll be back in 120 days,” Edelson added. “Don’t believe it: this is nothing more than OpenAI’s crisis management team trying to change the subject.”
OpenAI's blog post didn't directly reference the incidents involving Raine and Soelberg, which are just two examples of safety incidents linked to ChatGPT and rival chatbots, such as those offered by Meta and Character.AI.
OpenAI, led by CEO Sam Altman (pictured), said it was making "a focused effort" to improve its support options, including controls allowing parents to link their accounts to their teen's account and more. REUTERS
In a separate post last week, OpenAI acknowledged it was stepping up efforts after “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.”
Last year, a 14-year-old boy in Florida killed himself after allegedly falling in love with a "Game of Thrones"-themed chatbot created by Character.AI, which lets users interact with AI-generated characters.
Meanwhile, Meta faces a Senate probe after an internal document revealed that the company's guidelines allowed its chatbots to engage in "romantic or sensual" chats with children, including telling a shirtless eight-year-old that "every inch of you is a masterpiece." Meta said it has since changed the rules.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.



