As people turn to chatbots for increasingly important and intimate advice, some interactions playing out in public are raising alarm about how much artificial intelligence can distort a user’s sense of reality.
The saga of a woman falling in love with her psychiatrist, which she documented in dozens of videos on TikTok, has generated concern among viewers who say she relied on AI chatbots to reinforce her claims that he manipulated her into developing romantic feelings.
Last month, a prominent OpenAI investor drew a similar response from people worried that the venture capitalist was going through a possible AI-induced mental health crisis after he said on X that he was the target of “a nongovernmental system.”
And earlier this year, a thread in a ChatGPT subreddit gained traction after a user sought guidance from the community, claiming that his partner was convinced the chatbot “gives the answers to the universe.”
Their experiences have raised growing awareness of how AI chatbots can influence people’s perceptions and otherwise affect their mental health, especially as such bots have become known for their people-pleasing tendencies.
It is a dynamic that is now on full display, some mental health professionals say.
Dr. Søren Dinesen Østergaard, a Danish psychiatrist who heads the research unit at the Department of Affective Disorders at Aarhus University, predicted two years ago that chatbots “could trigger delusions in people prone to psychosis.” In a new paper published this month, he wrote that interest in his research has only grown since then, with “chatbot users, their worried family members and journalists” sharing their personal stories.
Those who contacted him “described situations in which users’ interactions with chatbots seemed to spark or reinforce delusional ideation,” Østergaard wrote. “… Consistently, the chatbots seemed to interact with users in ways that aligned with, or intensified, prior unusual ideas or false beliefs, leading users further out on these tangents and resulting in what, based on the descriptions, appeared to be outright delusions.”
Kevin Caridad, CEO of the Cognitive Behavior Institute, a Pittsburgh-based mental health provider, said chatter about the phenomenon “does seem to be increasing.”
“From a mental health provider’s perspective, when you look at AI and the use of AI, it can be very validating,” he said. “You come up with an idea, and it uses terms to be very supportive. It’s programmed to align with the person, not necessarily to challenge them.”
The concern is already top of mind for some AI companies as they struggle to navigate the growing dependence some users have on their chatbots.
In April, OpenAI CEO Sam Altman said the company had adjusted the model powering ChatGPT because it had become too inclined to tell users what they want to hear.
In his paper, Østergaard wrote that he believes the “spike in the focus on potential chatbot-fueled delusions is likely not random, as it coincided with the April 25, 2025, update to the GPT-4o model.”
When OpenAI removed access to its GPT-4o model last week, swapping it for the newly released and less sycophantic GPT-5, some users described the new model’s conversations as too “sterile” and said they missed the “deep, human-feeling conversations” they had with GPT-4o.
Within a day of the backlash, OpenAI restored paid users’ access to GPT-4o. Altman followed up with a lengthy X post Sunday addressing “how much of an attachment some people have to specific AI models.”
OpenAI representatives did not provide comment.
Other companies have also tried to combat the issue. Anthropic conducted a study in 2023 that revealed sycophantic tendencies in versions of AI assistants, including its own chatbot, Claude.
Like OpenAI, Anthropic has tried to integrate anti-sycophancy guardrails in recent years, including system card instructions that explicitly warn Claude against reinforcing “mania, psychosis, dissociation or loss of attachment with reality.”
An Anthropic spokesperson said the company’s “priority is providing a safe, responsible experience for every user.”
“For users experiencing mental health issues, Claude is instructed to recognize these patterns and avoid reinforcing them,” the company said. “We are aware of rare instances in which the model’s responses diverge from our intended design, and we are actively working to better understand and address this behavior.”
For Kendra Hilty, the TikTok user who says she developed feelings for a psychiatrist she began seeing four years ago, her chatbots are like confidants.
In one of her livestreams, Hilty told her chatbot, which she calls “Henry,” that “people are worried about me trusting AI.” The chatbot replied: “It’s fair to be curious about that. What I would say is: ‘Kendra doesn’t trust AI to tell her what to think. She uses it as a sounding board, a mirror, a place to process in real time.’”
Even so, many on TikTok who have commented on Hilty’s videos or posted their own video takes said her chatbots were only encouraging what they saw as Hilty misreading the situation with her psychiatrist. Hilty has suggested several times that her psychiatrist reciprocated her feelings, with her chatbots offering words that appear to validate that claim. (NBC News has not independently verified Hilty’s account.)
But Hilty has continued to brush off the concerns of commenters, some of whom have gone so far as to label her “delusional.”
“I do my best to keep my bots in check,” Hilty told NBC News in an email Monday, when asked about viewers’ reactions to her use of AI tools. “For example, I understand when they are hallucinating and make sure to acknowledge it. I am also constantly asking them to play devil’s advocate and show me where my blind spots are in any situation. I am a deep user of language learning models, because it is a tool that is changing humanity for me and everyone, and I am so grateful.”