ChatGPT to get parental controls after US teen’s death

The American artificial intelligence firm OpenAI said Tuesday that it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system had encouraged their teenage son to take his own life.

“Within the next month, parents will be able to … link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules,” the generative AI company said in a blog post.

Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress,” OpenAI added.

Matthew and Maria Raine argue in a lawsuit filed last week in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before his death.

The lawsuit alleges that in his final conversation on April 11, 2025, ChatGPT helped Adam, 16, steal vodka from his parents and provided a technical analysis of a noose he had tied, confirming that it “could suspend a human.”

Adam was found dead hours later, having used the same method.

“When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” said attorney Melodi Dincer of the Tech Justice Law Project, which helped prepare the legal complaint.

“These are the same features that might lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers,” Dincer said.

Product design features set the stage for users to slot a chatbot into trusted roles like friend, therapist or doctor, she said.

Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.

“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she added.

“It remains to be seen whether they will do what they say they will do, and how effective that will be overall.”

The Raines’ case was the latest in a string to emerge in recent months of people being encouraged into delusional or harmful trains of thought by AI chatbots, prompting OpenAI to say it would reduce its models’ “sycophancy” towards users.

“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said Tuesday.

The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations … to a reasoning model” that puts more computing power into generating a response.

“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.


