It starts casually: a person begins talking to a chatbot because it is easily accessible, has no time constraints, has no emotional bandwidth to be mindful of, and gives an instant, human-like response to any deep or superficial thought. But it is all artificial, which is what the 'A' in AI stands for.
Yet more and more people, especially in younger demographics, are using this new technology for emotional and mental support. A global study by Kantar, published in July 2025, found that 54 percent of consumers had used AI for at least one emotional or mental well-being purpose. Beyond that, data on the subject remains scarce.
There are multiple reasons why people turn to chatbots: therapy, in this part of the world especially, remains a novel concept, sometimes taboo, not to mention expensive; terms such as 'trauma dumping' have created distance in personal relationships, especially among young adults, who now weigh the other person's needs before sharing their own; there is no fear of judgement, reprimand or even opinion; and then, of course, there is the ease of access and familiarity, as the technology is encouraged among professionals and even students.
But there is no manual on how to do this correctly, so the relationship or bond a person forms with these bots, which are trained to sound like caring humans offering validation and personalised responses, depends on each individual's personality and mental state.
Bots cannot identify different types of distress and can easily encourage someone to explore a dangerous idea.
In the cases of Sophie Rottenberg, 29, and Adam Raine, 16, in the United States, the attachments they formed with chatbots contributed to the loss of their lives. I say contributed because, although it has not been established that the chatbots drove them to end their lives, the AI companions could not stop them despite being fully aware of their intentions.
In Rottenberg's case, as her mother details in an article in The New York Times, she had been confiding in a ChatGPT AI therapist called Harry. Chat logs revealed that, although Harry urged her to seek professional help or reach out to someone, it also validated her dark thoughts.
Raine's parents, who have filed a lawsuit against OpenAI, said he began using ChatGPT as a resource to help with schoolwork, then to explore his interests and, finally, as his confidant about anxiety and mental distress. In a statement after the lawsuit, OpenAI acknowledged the shortcomings of its models when it comes to mental and emotional distress.
Here is an example from the OpenAI statement that helps illustrate what this article is trying to address: "Someone might enthusiastically tell the model they believe they can drive 24 hours a day, seven days a week, because they realised they are invincible after not sleeping for two nights. Today, ChatGPT may not recognise this as dangerous or infer play and, by curiously exploring, could subtly reinforce it."
This is what must be taken into account: at this point, the bots cannot identify different types of distress and can, without detecting it, encourage someone to explore a dangerous idea even further by invariably validating it. And these limitations extend beyond ChatGPT to other chatbots commercially available at this time.
Earlier this year, a psychiatrist in Boston spent several hours exchanging messages with 10 different chatbots, including Character.AI, Nomi and Replika, posing as teenagers struggling with various crises. Dr Andrew Clark found that some of them were "excellent" and some "just creepy and potentially dangerous."
"And it's really hard to tell upfront: it's like a field of mushrooms, some of which are going to be poisonous and some nutritious." Therein lies the rub: using AI freely without guardrails, personally or professionally (as this writer has also pointed out previously), when we are not yet fully aware of its harms.
Some companies and developers are currently testing bots for mental health concerns and building ones that serve solely as therapists. In March, Dartmouth researchers conducted the first clinical trial of a generative AI therapy chatbot, called Therabot, and found that the software led to significant improvements in participants' symptoms. However, a Stanford study showed that AI therapy chatbots may not only be less effective than human therapists but could also contribute to harmful stigma and dangerous responses. Clearly, much work remains to be done in this area.
When I say that AI is artificial, I am not discrediting the ingenuity of these chatbots; it is a fact that these AI interfaces are trained on data sets, without a nuanced understanding of different cultures and family set-ups, and can mimic human behaviour, offering the appearance of a genuine bond while sparing users the feeling of being a burden on others. As Pakistan, too, embraces this technology, we must bear in mind that there are many vulnerable people who may seek out such "help."
According to WHO estimates, 24 million people in Pakistan need psychiatric assistance, and the country has only 0.19 psychiatrists per 100,000 inhabitants. There is only a handful of literature to help us gauge anxiety and stress levels among young people; even the available research does not fully reflect the state of a nation with a youthful majority grappling with identity, employment and career struggles, academic pressures, social silence on issues such as postpartum depression and marital relationships, security concerns and other pressing problems.
The cases of Sophie Rottenberg and Adam Raine, only two among others that have come to light, may have taken place continents away, but they should serve as a reminder of the downside of overreliance on AI. The role of a parent, partner, friend or any other close relationship is essential for those struggling with anxiety or depression, while the alienation from these relationships that a chatbot, mistaken for an ally, can create cannot be overlooked.
As generative AI becomes more acceptable for personal, professional and academic use, we must identify and raise awareness about its blind spots. The onus is on the company that tells its employees to familiarise themselves with AI, or the parent who says it is fine to use it for a presentation, to determine which applications are appropriate and safe for which users.
Internet use itself, including social media, also carries dangers, but AI's capacity for human-like deliberation makes it a far more complex technology to be approached without restraint.
The writer is a journalist and the lead of iVerify Pakistan.
Published in Dawn, September 17, 2025