Parents sue OpenAI over ChatGPT’s role in son’s suicide

Before sixteen-year-old Adam Raine died by suicide, he spent months consulting ChatGPT about his plans to end his life. Now, his parents have filed the first known wrongful death lawsuit against OpenAI, The New York Times reports.

Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But research has shown that these safeguards are far from foolproof.

In Raine’s case, while using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a help line. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing.

OpenAI has addressed these shortcomings on its blog. “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the post reads. “We are continuously improving how our models respond in sensitive interactions.”

Still, the company acknowledged the limitations of its existing safety training for large models. “Our safeguards work more reliably in common, short exchanges,” the post continues. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

These issues are not unique to OpenAI. Character.AI, another AI chatbot maker, is also facing a lawsuit over its role in a teenager’s suicide. LLM-powered chatbots have also been linked to cases of AI-related delusions, which existing safeguards have struggled to detect.
