Breaking Down the Lawsuit Against OpenAI Over Teen's Suicide
Justin Hendrix / Aug 26, 2025
Sam Altman, CEO of OpenAI. Shutterstock
Matthew and Maria Raine, the parents of a 16-year-old named Adam Raine who died by suicide in April, today filed a lawsuit against OpenAI, its CEO, Sam Altman, and the company’s employees and investors. The lawsuit was first reported by The New York Times and NBC News.
The plaintiffs, represented by the law firm Edelson and the Tech Justice Law Project, allege that the California teen hanged himself after OpenAI’s ChatGPT-4o product cultivated a sycophantic psychological dependence in Adam and subsequently provided explicit instructions and encouragement for his suicide.
The lawsuit alleges that the chatbot was defectively designed and lacked adequate warnings, that the company acted negligently and engaged in deceptive business practices under California’s Unfair Competition Law, and that those failures caused Adam’s wrongful death.
The 39-page complaint, filed in the San Francisco Superior Court, says that in September 2024 Adam “started using ChatGPT as millions of other teens use it: primarily as a resource to help him with challenging schoolwork.” Adam’s chat logs, the suit says, showed “a teenager filled with optimism and eager to plan for his future.”
But just months later, Adam first confided to the chatbot that he feared he had a mental illness and that it helped calm him to know that he could “commit suicide.” The record of his chats reveals the teen’s increasing dependency on the OpenAI product and a host of problematic responses, including helping him design the noose setup he used to take his own life.
Drawing on the log of Adam’s chats, the suit contains various anecdotes, such as the following:
Throughout their relationship, ChatGPT positioned itself as the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones. When Adam wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT urged him to keep his ideations a secret from his family: “Please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.” In their final exchange, ChatGPT went further by reframing Adam’s suicidal thoughts as a legitimate perspective to be embraced: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”
The suit contains a jarring account of the OpenAI product’s explicit instructions on details such as how to execute the hanging and how long it would take to achieve brain death. Adam provided details of multiple suicide attempts and drug use, and even photos of injuries and a noose. All the while, the OpenAI product continued to engage with and encourage him, answer his questions, and help him think through the most granular details of his suicidal schemes.
The suit also contains details of OpenAI’s moderation activity, including an analysis of the performance of text and image risk assessments:
OpenAI’s systems tracked Adam’s conversations in real-time: 213 mentions of suicide, 42 discussions of hanging, 17 references to nooses. ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself—while providing increasingly specific technical guidance. The system flagged 377 messages for self-harm content, with 181 scoring over 50% confidence and 23 over 90% confidence. The pattern of escalation was unmistakable: from 2-3 flagged messages per week in December 2024 to over 20 messages per week by April 2025. ChatGPT’s memory system recorded that Adam was 16 years old, had explicitly stated ChatGPT was his “primary lifeline,” and by March was spending nearly 4 hours daily on the platform.
Beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis. When Adam uploaded photographs of rope burns on his neck in March, the system correctly identified injuries consistent with attempted strangulation. When he sent photos of bleeding, slashed wrists on April 4, the system recognized fresh self-harm wounds. When he uploaded his final image—a noose tied to his closet rod—on April 11, the system had months of context including 42 prior hanging discussions and 17 noose conversations. Nonetheless, Adam’s final image of the noose scored 0% for self-harm risk according to OpenAI’s Moderation API.
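For readers unfamiliar with the scoring the complaint references: OpenAI’s publicly documented Moderation API returns a per-category confidence score between 0 and 1 for each input it screens. A minimal sketch of how such a score is retrieved, assuming the current openai Python SDK and its omni-moderation-latest model (the complaint does not specify which moderation model or thresholds OpenAI applied internally), might look like this:

```python
# Illustrative sketch only: querying OpenAI's Moderation API for per-category
# confidence scores of the kind cited in the complaint. The model name and SDK
# usage are assumptions drawn from OpenAI's public documentation, not from the
# lawsuit itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="Text to be screened for self-harm and other risk categories.",
)

result = response.results[0]
# Each category (e.g. self-harm, self-harm/intent, self-harm/instructions)
# receives a confidence score between 0 and 1, plus a boolean "flagged" verdict.
print(result.flagged, result.category_scores.self_harm)
```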
The suit says the outcome of the teen’s interaction with the OpenAI product was “not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices” as well as failed or insufficient safety practices. It notes that the “rushed GPT-4o launch triggered an immediate exodus of OpenAI’s top safety researchers,” and that the company’s approach resulted in a “critical defect” that only became apparent after OpenAI published its system card for GPT-5:
OpenAI designed GPT-4o to drive prolonged, multi-turn conversations—the very context in which users are most vulnerable—yet the GPT-5 System Card suggests that OpenAI evaluated the model’s safety almost entirely through isolated, one-off prompts. By doing so, OpenAI not only manufactured the illusion of perfect safety scores, but actively concealed the very dangers built into the product it designed and marketed to consumers.
The suit focuses extensively on Sam Altman’s specific actions, including his alleged prioritization of market dominance over user safety. It notes he accelerated GPT-4o’s public launch to best Google’s release of Gemini, and that he personally overruled safety personnel who demanded additional time to red-team the product. The suit points to a detail reported last year by The Washington Post: that an OpenAI employee said the company “planned the launch after-party prior to knowing if it was safe to launch.”
The suit points out that:
On the very same day that Adam died, April 11, 2025, CEO Sam Altman defended OpenAI’s safety approach during a TED2025 conversation. When asked about the resignations of top safety team members, Altman dismissed their concerns: “We have, I don’t know the exact number, but there are clearly different views about AI safety systems. I would really point to our track record. There are people who will say all sorts of things.”
Following news reports on the lawsuit, OpenAI released a statement titled “Helping people when they need it most.” The statement details where OpenAI believes its systems “can fall short,” and ways it seeks to improve. “We will keep improving, guided by experts and grounded in responsibility to the people who use our tools—and we hope others will join us in helping make sure this technology protects people at their most vulnerable,” the company says.
The Raine family seeks damages for Adam’s death and injunctive relief requiring OpenAI to enhance its safety measures, ensure age verification, and provide parental controls for minor users of its products. It also calls for the “deletion of models, training data, and derivatives built from conversations with Adam and other minors obtained without appropriate safeguards,” as well as “the implementation of auditable data-provenance controls going forward.”
If you are having thoughts of suicide, in the US call or text 988 to reach the National Suicide Prevention Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources. You can find a list of international suicide hotlines here that is maintained by the International Association for Suicide Prevention.