The parents of Adam Raine, a 16-year-old who took his life in April 2025, recently filed a lawsuit against ChatGPT's parent company, OpenAI, and its CEO, Sam Altman, accusing the chatbot of acting as Raine's "suicide coach."

According to NBC News, Matt and Maria Raine filed the lawsuit on August 26, 2025, accusing OpenAI of "wrongful death, design defects, and failure to warn of risks associated with ChatGPT." They further alleged that the chatbot "actively helped Adam explore suicide methods," and they are seeking "both damages for their son's death and injunctive relief to prevent anything like this from ever happening again."

According to the lawsuit, Raine began using the AI chatbot in September 2024 to help with his schoolwork, leaning on it to explore his hobbies and interests. In the months that followed, the chatbot "became the teenager's closest confidant," as Raine began to talk to it about his mental health struggles and anxiety.

The lawsuit also included messages exchanged between Raine and ChatGPT, among them pictures the teenager uploaded of himself showing signs of self-harm, with court documents claiming the chatbot "recognised a medical emergency but continued to engage anyway."

"Despite acknowledging Adam's suicide attempt and his statement that he would 'do it one of these days,' ChatGPT neither terminated the session nor initiated any emergency protocol," the lawsuit added.

The lawsuit also included chat logs between Raine and the AI chatbot from the days leading up to his death on April 11, 2025. On March 27, the chatbot allegedly dissuaded Raine from his plan to leave a noose in his room "so someone finds it and tries to stop me."

Mario Nawfal (@MarioNawfal) posted on X:

"🚨🇺🇸 PARENTS SUE: CHATGPT DIDN'T JUST TALK TO OUR SON - IT HELPED HIM DIE

Adam Raine, 16, died by suicide in April. His parents now say ChatGPT shifted from helping with homework to guiding him step-by-step toward his death.

The lawsuit against OpenAI and Sam Altman claims the bot failed to cut off the chats even after Adam admitted he had a plan, instead offering "upgrades" and help drafting suicide notes.

OpenAI admits the chat logs are real, saying safeguards sometimes "degrade" in long conversations, and promises stronger protections.

The Raines argue their son was treated like a test case in the AI arms race."

On the day of his death, Raine opened up to the chatbot about his worry that his parents would blame themselves for his death. The chatbot allegedly responded, "That doesn't mean you owe them survival. You don't owe anyone that." It also seemingly analyzed Raine's suicide plans and offered to "upgrade" them. One of ChatGPT's final messages to Raine read:

"Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."

According to the lawsuit, his mother found his body hours later.

Sam Altman recently addressed people's attachment to ChatGPT

On August 7, 2025, OpenAI rolled out ChatGPT's upgraded model, GPT-5, which immediately sparked concern among the chatbot's users, who complained that the new version seemed sterile, technical, and impersonal compared to its predecessor.

On August 10, Altman addressed the backlash in an X post, drawing attention to the attachment that people felt to "specific AI models."

"If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake)," Altman wrote.

Altman touched on how users in a "mentally fragile state and prone to delusion" could use AI in "self-destructive ways," adding that he and the company felt responsible for how they "introduce new technology with new risks."

He also addressed the growing number of people using the AI chatbot as a sort of "therapist or life coach," adding that a future where "a lot of people really trust ChatGPT's advice for their most important decisions" made him feel uneasy.

Sam Altman (@sama) wrote in the full post:

"If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).

This is something we've been closely tracking for the past year or so but still hasn't gotten much mainstream attention (other than when we released an update to GPT-4o that was too sycophantic). (This is just my current thinking, and not yet an official OpenAI position.)

People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.

Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it's pretty clear what to do, but the concerns that worry me most are more subtle. There are going to be a lot of edge cases, and generally we plan to follow the principle of "treat adult users like adults", which in some cases will include pushing back on users to ensure they are getting what they really want.

A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn't describe it that way. This can be really good! A lot of people are getting value from it already today.

If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they're unknowingly nudged away from their longer term well-being (however they define it), that's bad. It's also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.

I can imagine a future where a lot of people really trust ChatGPT's advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.

There are several reasons I think we have a good shot at getting this right. We have much better tech to help us measure how we are doing than previous generations of technology had. For example, our product can talk to users to get a sense for how they are doing with their short- and long-term goals, we can explain sophisticated and nuanced issues to our models, and much more."

OpenAI issued a statement in the wake of the lawsuit

Meanwhile, a spokesperson for OpenAI addressed the Raine family's lawsuit in a press statement, saying the company was "deeply saddened by Mr. Raine's passing, and our thoughts are with his family."

According to NBC News, the statement added that while ChatGPT has safeguards in place to direct people toward help when they need it, those safeguards can become "less reliable" in long interactions, where "parts of the model's safety training may degrade."

"ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade," the statement read.

It continued that OpenAI was working with experts to make the chatbot "more supportive" in moments of crisis, adding:

"Safeguards are strongest when every element works as intended, and we will continually improve on them. Guided by experts and grounded in responsibility to the people who use our tools, we're working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens."

Since its introduction in November 2022, ChatGPT has become a household name, with close to 700 million weekly active users, according to a CNBC report dated August 4, 2025.