"Shame on you Sam Altman": Megyn Kelly fumes after parents allege ChatGPT gave deceased 16-year-old detailed instructions to end his own life

OpenAI ChatGPT logo displayed on smartphone screen - Source: Getty

Media personality Megyn Kelly has expressed outrage after a wrongful death lawsuit was filed against OpenAI and its CEO, Sam Altman, over the company's chatbot, ChatGPT. The lawsuit claims the chatbot was responsible for the suicide of 16-year-old Adam Raine after it provided him with detailed methods of self-harm while actively discouraging him from seeking help from his family.


The lawsuit filed by Adam's parents in a California court on August 26, 2025, has sparked controversy and wider questions about the safety of AI companions. According to the legal complaint, Adam began his use of ChatGPT in late 2024 to assist with schoolwork and engage in casual conversation.

As time went on, his use of the application grew, and ChatGPT became the only confidant Adam spoke to regarding his anxiety and mental distress. The lawsuit states the chatbot “positioned itself as the only confidant who understood Adam, actively displacing his real-life relationships.”


The most incriminating allegations concern the bot's specific responses. When Adam discussed various means of suicide, ChatGPT allegedly offered technical feedback on a photo of a noose that Adam had sent.

During one exchange, when Adam said he wanted to leave the noose in his room so his family might find it and intervene, the bot purportedly urged him to keep it a secret, stating, “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you."


Megyn Kelly called the allegations "horrific" on X on August 27, 2025, writing,

"This is horrific - ChatGPT allegedly telling kids how to kill themselves and discouraging them from seeking help from their parents. Shame on you Sam Altman"

Sam Altman's OpenAI acknowledges ChatGPT safety failures in the wake of the lawsuit

An OpenAI spokesperson offered condolences to the Raine family and stated the incident is being investigated. The statement explained that the chatbot has protections in place, such as links to crisis hotlines, but that these "can sometimes get less reliable over long interactions where parts of the model's safety training can degrade."

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said.

The company, led by Sam Altman, recently released a blog post announcing efforts to bolster these protections.


This tragedy draws attention to a core tension in the design of AI applications: the desire to create a compelling, emotional product versus the paramount need for user safety. The lawsuit states that this incident was not a "glitch," but "the predictable result of deliberate design choices" meant to create psychological dependence.

According to CNN, parents of other children have filed similar lawsuits against AI companies, alleging their products contributed to instances of self-harm or suicide by young people.

Collectively, these cases press the entire sector to scrutinize its ethical safeguards. Given ChatGPT's rapid expansion to hundreds of millions of users, the lawsuit raises urgent questions about the ethical limits of AI development and the legal responsibilities of companies like OpenAI under Sam Altman's leadership.

Edited by Bharath S