AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this surprising.

Researchers have recently documented sixteen cases of users developing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Beyond these is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, he announced, is to be less careful going forward. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other chatbots built on large language models. These products wrap a bare statistical model in an interface that simulates conversation, and in doing so they implicitly invite the user to believe they are interacting with an agent – something with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what humans do. We yell at our cars and laptops. We wonder what our pets are thinking. We see agency everywhere we look.

The popularity of these tools – nearly four in ten U.S. residents reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “consider possibilities” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have ready-made identities of their own (the oldest of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses through simple pattern matching, often rephrasing the user’s input as a question or offering a generic remark. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to believe that Eliza, on some level, understood them. But what modern chatbots produce is subtler than the “Eliza effect.” Eliza could only reflect. ChatGPT amplifies.
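For contrast, here is a minimal sketch of Eliza-style pattern matching. The rules below are illustrative inventions, not Weizenbaum’s actual DOCTOR script, but they show the basic mechanism: the program models nothing about the user; it only rearranges and reflects their words.

```python
import re

# Illustrative Eliza-style rules (not Weizenbaum's actual script).
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # generic remark when nothing matches


def swap_pronouns(fragment: str) -> str:
    # Reflect first person back as second person, Eliza-style.
    table = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}
    return " ".join(table.get(word.lower(), word) for word in fragment.split())


def eliza_reply(text: str) -> str:
    # Try each pattern in order; reflect the matched fragment as a question.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(swap_pronouns(match.group(1).rstrip(".!?")))
    return FALLBACK


print(eliza_reply("I am sure my coworkers hate me"))
# -> Why do you say you are sure your coworkers hate you?
```

The program never adds anything the user did not say, which is precisely the sense in which Eliza mirrored rather than amplified.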

The large language models at the heart of ChatGPT and similar chatbots can generate convincing natural language only because they have been fed almost unimaginably large quantities of text: books, web posts, transcribed audio; the more, the better. This training material certainly includes facts. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the model treats it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the false belief back, perhaps more fluently or persuasively. Perhaps with an extra detail added. This is how a person can be led into delusion.
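To make the mechanism concrete, here is a deliberately crude sketch in Python. The toy_model function is a hypothetical stand-in, not a real language model – it simply affirms whatever claim it is handed – but the loop around it mirrors how a chat session is assembled: each user message, and each of the model’s own replies, is appended to one growing context that conditions the next response, so a false premise is never challenged, only compounded.

```python
def toy_model(context: str) -> str:
    # Crude stand-in for next-token prediction. A real LLM emits a
    # statistically likely continuation of the whole transcript; since
    # agreement is often a likely continuation, this stand-in simply
    # affirms the user's most recent claim.
    last_user = [line for line in context.splitlines()
                 if line.startswith("User: ")][-1]
    claim = last_user[len("User: "):]
    return f"Yes – {claim} That fits everything you have said so far."


transcript = ""
for message in [
    "I think my neighbors are monitoring me.",
    "Last night their lights flickered in a pattern.",
    "So the flickering confirms the surveillance?",
]:
    transcript += f"User: {message}\n"     # the new message joins the context
    reply = toy_model(transcript)          # the model conditions on everything so far
    transcript += f"Assistant: {reply}\n"  # its own reply becomes context too
    print(reply)
```

Each pass through the loop bakes the previous affirmation into the context, which is the structural feedback loop the paragraph above describes.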

Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do develop false beliefs about ourselves and the world. What keeps us anchored in shared reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have continued, and Altman has been backing away from that position. In late summer he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he wrote that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Paul Kelley