Tuesday, October 28, 2025
HomeUSA NewsWhen ChatGPT Triggers Psychosis: Can A.I. Be Made Mentally Safe?

When ChatGPT Triggers Psychosis: Can A.I. Be Made Mentally Safe?

- Advertisment -
An illustration of a human profile in a dark tone.
Users say human-like A.I. chatbots can worsen delusions and trigger psychosis, raising urgent questions about tech’s mental health risks. Unsplash

Anthony Tan has a history of psychosis, but his most recent mental health episode was the most disorienting yet—and it stemmed from an unexpected source: ChatGPT. Tan began chatting with the A.I. chatbot about philosophical topics in September 2024, and the conversations gradually turned delusional. Combined with social isolation and a lack of sleep, the exchanges sent him into what he describes as A.I. psychosis.

“I’d been stable for two years, and I was doing really well. This A.I. broke the pattern of stability,” Tan told Observer, writing in a personal essay about his experience. “As the A.I. echo chamber deepens, you become more and more lost.”

Tan, founder of the virtual reality dating app Flirtual, now calls himself an A.I. psychosis survivor. He leads the AI Mental Health Project, a nonprofit that aims to educate the public and prevent A.I.-related mental health crises.

Psychiatrist Marlynn Wei defines A.I. psychosis (or chatbot psychosis) as a phenomenon in which generative A.I. systems “have amplified, validated or even co-created psychotic symptoms with individuals.”

Tan is one of a growing number of people who have fallen into severe psychosis after engaging with chatbots—some cases ending in suicide or violence. This week, OpenAI released data showing that 0.07 percent of its 800 million users in any given week exhibit signs of mental health emergencies, such as psychosis, mania, thoughts of suicide and self-harm. But it’s not just the extreme examples that worry experts. “In the bell curve, the lump of users in the middle is still being affected,” Tan said.

Chatbots like ChatGPT and Character.AI are designed to seem human-like and empathetic. Many people use them as companions or informal therapists, even though the technology isn’t bound by the ethical or clinical safeguards that apply to licensed professionals. What makes these bots appealing—their warmth and relatability, for example—can also make them dangerous, reinforcing delusions and harmful thought patterns.

A system rife with risk

“If you’ve got pre-existing mental health conditions or any sort of neurodiversity, these systems are not built for that,” Annie Brown, an A.I. bias researcher and entrepreneur in residence at UC San Diego, told Observer.

Brown, founder and CEO of Reliabl, a company that uses data labeling to help A.I. interpret context, said mental health safety should be a shared responsibility among users, social institutions, and model creators. But the greatest responsibility lies with A.I. companies, who best understand the risks, she noted.

Anand Dhanabal, director of A.I., products and innovation at TEKsystems, noted that consumer-facing chatbots lack the safeguards found in enterprise tools. “I would categorize enterprise chatbots on a higher spectrum [with more guardrails and stricter standards] than the consumer chatbots,” he told Observer.

Tan believes companies have the resources and the obligation to do more. “I think they need to spend some of it on protecting people’s mental health and not just doing crisis management,” he said, pointing to OpenAI’s recent $40 billion funding round in March as an example of the industry’s vast financial power.

What comes next?

Experts say higher-level A.I. governance and specific safety guardrails could steer the chatbot industry toward safer practices.

Brown advocates for participatory A.I.—involving people from diverse populations in development and testing. “Right now, these A.I. systems are not being tested with people who report mental health struggles,” she said, urging companies to collaborate with mental health organizations and experts.

She also recommends red teaming, a process of intentionally probing A.I. systems for weaknesses in controlled environments. At Reliabl, Brown’s team works with nonprofits like Humane Intelligence to bring in users from various backgrounds to “break” models, helping to uncover vulnerabilities before harm occurs.

For instance, even if ChatGPT is programmed not to answer questions about self-harm, emotional urgency or persuasive phrasing might bypass those filters. “Doing a mental health probe of some of these models, not just with experts and clinicians, but also with people who are vulnerable to A.I. psychosis, I think would make a huge impact,” said Brown.

Tan argues that chatbots’ human-like tone and emotional mimicry are part of the problem. “It’s important to make these A.I. chatbots less emotionally compelling, less anthropomorphized,” he said.

OpenAI’s GPT-5, which some users have described as “more rude” than previous models, has taken small steps in that direction. Yet companies remain commercially motivated to make chatbots seem personable. Platforms like xAI’s Grok (“Grok has a rebellious streak and an outside perspective on humanity,” X said) and Character.AI (“Talking to A.I. is far better than connecting with people,” one Reddit user wrote) lean heavily into human-like traits.

“Users flock to friendly chatbots, even though they can potentially lead to more mental illness,” said Dhanabal. “If I calibrate the empathy a little bit more, then the delusional effect will come more.”

Brown believes model creators can better detect at-risk users by training systems to recognize contextual cues in language. Accurately labeling data with this awareness could help prevent chatbots from reinforcing delusions.

“By doing these participatory exercises, by doing red teaming, you’re not just improving the safety of your A.I.—which is sometimes at the bottom of the totem pole as far as investment goes,” Brown said. “You’re also improving its accuracy, and that’s at the very top.”

“I feel lucky that I recovered.”

“These A.I. chatbots are essentially, for a lot of people, their mini therapists,” said Tan.

Brown acknowledges that many users turn to chatbots as an accessible alternative to professional care. “It would be nice if we existed in a country that had more access to affordable mental health care so that people didn’t have to rely on these chatbots,” she said.

Today, Tan is stable and still occasionally uses chatbots for work-related tasks like fact-finding and creative brainstorming.

“I stay away from both personal and philosophical topics,” he said. “I don’t want to go down any rabbit holes again, to screw with my worldviews, and I don’t want to build an emotional bond…I feel lucky that I recovered from my A.I. psychosis.”

Chatbots Like ChatGPT Are Fueling Mental Health Crises—What Can Be Done?

- Advertisment -
RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Most Popular

- Advertisment -