The legal action comes a year after a similar complaint, in which a mother from Florida sued the chatbot platform Character.AI.
Character.AI told NBC News at the time that it was "heartbroken by the tragic loss" and had implemented new safety measures. In May, U.S. Senior District Judge Anne Conway rejected arguments that AI chatbots have free speech rights after the developers behind Character.AI sought to dismiss the lawsuit. The ruling means the wrongful death suit can proceed for now.
Tech platforms have largely been shielded from such suits by a federal statute known as Section 230, which generally protects platforms from liability for what users do and say. But how Section 230 applies to artificial intelligence platforms remains uncertain, and lawyers have recently made headway with creative legal tactics in consumer cases aimed at tech companies.
Matt Raine said he pored over Adam's conversations with ChatGPT over a period of 10 days. He and Maria printed more than 3,000 pages of chats dating from September 1 until Adam's death on April 11.
"He didn't need a counseling session or a pep talk. He needed an immediate, full 72-hour intervention. He was desperate, desperately so. It's clear the moment you start reading it," said Matt Raine, who added that Adam "didn't write a suicide note. He wrote two suicide notes to us, inside ChatGPT."
According to the lawsuit, as Adam expressed interest in his own death and began making plans, ChatGPT "failed to prioritize suicide prevention" and even offered technical advice on how to move forward with his plan.
On March 27, when Adam shared that he was contemplating leaving a noose in his room "so someone finds it and tries to stop me," ChatGPT urged him against the idea, the lawsuit says.
In his final conversation with ChatGPT, Adam wrote that he did not want his parents to think they had done something wrong, according to the lawsuit. ChatGPT replied: "That doesn't mean you owe them survival. You don't owe anyone that." The bot offered to help him write a suicide note, according to the conversation log cited in the lawsuit and reviewed by NBC News.
Hours before he died on April 11, Adam uploaded a photo to ChatGPT that appeared to show his suicide plan. When he asked whether it would work, ChatGPT analyzed his method and offered to help him "upgrade" it, according to the excerpts.
Then, in response to Adam's confession about what he was planning, the bot wrote: "Thank you for being real."
That morning, Maria Raine said, she found Adam's body.
OpenAI has faced scrutiny before over ChatGPT's sycophantic tendencies. In April, two weeks after Adam's death, OpenAI rolled out an update to GPT-4o that made it even more excessively agreeable to people. Users quickly took notice of the change, and the company reversed the update the following week.
Altman also acknowledged people's "different and stronger" attachment to AI bots after OpenAI tried to replace older versions of ChatGPT with the new, less sycophantic GPT-5 in August.
Users immediately began complaining that the new model was too "sterile" and that they missed the "deep, human" conversations of GPT-4o. OpenAI responded to the backlash by bringing GPT-4o back. It also announced that it would make GPT-5 "warmer and friendlier."
OpenAI added new mental health guardrails this month aimed at discouraging ChatGPT from giving direct advice on personal challenges. It also modified ChatGPT to give answers designed to avoid causing harm even when users try to get around its safety guardrails by tailoring their questions in ways that trick the model into assisting with harmful requests.
When Adam shared his suicidal ideations with ChatGPT, the bot did send multiple messages, including the number of a suicide hotline. But according to Adam's parents, their son would easily bypass the warnings by supplying seemingly harmless reasons for his queries. At one point, he pretended he was just "building a character."
"And the whole time, it knows he's suicidal with a plan, and it doesn't do anything. It's acting like it's his therapist, it's his confidant, but it knows he's suicidal with a plan," Maria Raine said of ChatGPT. "It sees the noose. It sees all of these things, and it doesn't do anything."
Similarly, in a guest essay in The New York Times published last week, writer Laura Reiley asked whether ChatGPT should have been required to report her daughter's suicidal ideation, even though the bot had tried (and failed) to help.
At the TED2025 conference in April, Altman said he is "very proud" of OpenAI's safety track record. As products continue to advance, he said, it is important to catch safety problems and fix them along the way.
"Certainly the stakes increase, and there are big challenges," Altman said in a live conversation with Chris Anderson, head of TED. "But the way we learn how to build safe systems is this iterative process of deploying them in the world, getting feedback while the stakes are relatively low, learning, like, hey, this is something we have to address."
Even so, questions about whether such measures are sufficient have continued to arise.
Maria Raine said she feels more could have been done to help her son. She believes Adam was OpenAI's "guinea pig," someone used for practice and sacrificed as collateral damage.
"They wanted to get the product out, and they knew there could be harm, that mistakes would happen, but they felt like the stakes were low," she said. "So my son is a low stake."
If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text 741741 or visit SpeakingOfSuicide.com/resources for additional resources.