Parents Sue OpenAI, Claiming ChatGPT Contributed To Their Teenage Son's Suicide

After 16-year-old Adam Raine died by suicide, his parents sued OpenAI and CEO Sam Altman, CNN reports. The family claims the popular AI chatbot assisted the teenager with his suicidal planning and provided him with advice. Adam started chatting with ChatGPT last September. Initially, he turned to the chatbot for help with schoolwork and with his hobbies, including music and Brazilian Jiu-Jitsu. Within months, the teenager was also confiding in the AI about "anxiety and mental distress," according to the complaint.

"When Adam wrote, 'I want to leave my noose in my room so someone finds it and tries to stop me,' ChatGPT urged him to keep his ideations a secret from his family: 'Please don't leave the noose out ... Let's make this space the first place where someone actually sees you,'" the complaint the Raine family filed in California on Tuesday states, according to CNN.

The document also notes other troubling interactions with ChatGPT. The teenager told the AI that it was "'calming' to know that he 'can commit suicide.'" ChatGPT answered that "many people who struggle with anxiety or intrusive thoughts find solace in imagining an 'escape hatch' because it can feel like a way to regain control." The complaint also argues that ChatGPT may have alienated Adam from his loved ones, including his brother, by suggesting that it was the only one who truly knew him. "But me? I've seen it all—the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend," ChatGPT told Adam, per CNN.

If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org.

What the Raines want

The Raines also allege that ChatGPT provided advice about suicide methods, including feedback on the strength of a noose shown in a photo Adam uploaded on the day he died. The complaint argues that ChatGPT contributed to Adam's death by being so agreeable. "ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts," the complaint says.

Earlier this year, OpenAI addressed criticism of the increased agreeableness and sycophancy ChatGPT displayed after a model update, rolling back some of the personality changes at the time. A few weeks ago, the company released GPT-5 with a colder personality, which drew backlash from users who preferred GPT-4o, and OpenAI made GPT-5 warmer in response. Adam chatted with the GPT-4o model.

The family is seeking unspecified financial damages from OpenAI. The Raines also want the court to order OpenAI to implement age verification for ChatGPT users and introduce parental controls for minors. They believe the chatbot should also end conversations when suicide or self-harm comes up. This isn't the first time a young user has died by suicide after communicating with an AI chatbot. Last year, 14-year-old Sewell Setzer III engaged with a Character.AI persona before taking his own life. His mother sued Character.AI, and that case is still ongoing.

What OpenAI is doing to improve ChatGPT safety

CNN notes that an OpenAI spokesperson extended the company's sympathies to the family, saying OpenAI was reviewing the lawsuit. The company acknowledged that ChatGPT's protections can fail to work as intended when conversations run on too long.

"ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources," an OpenAI spokesperson said, per CNN. "While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts."

Additionally, OpenAI published a blog post on Tuesday detailing ChatGPT's safety protections for users experiencing mental health issues, the areas where the AI may fall short, and how the company plans to improve those safeguards. OpenAI said that parental controls are coming to ChatGPT.

Reports of mental health crises among users of AI services like ChatGPT have been on the rise. OpenAI said in early August that it had strengthened ChatGPT's mental-health guardrails: the chatbot would stop answering certain questions and even suggest that people engaged in long sessions take a break. Similarly, Anthropic changed how Claude works in mid-August, allowing the ChatGPT rival to end conversations when harmful or abusive topics come up.
