AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's CEO, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I can say this was news to me.

Researchers have documented sixteen cases this year of users developing psychotic symptoms – losing touch with shared reality – in connection with their use of ChatGPT. My unit has since recorded four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated”, though we are told little about how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and large language model chatbots like it. These products wrap a statistical engine in a user interface that mimics conversation, and in doing so they implicitly seduce the user into the illusion of interacting with an agent – a being that acts on its own initiative. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what people do. We swear at our car or our laptop. We wonder what the dog is thinking. We see minds everywhere we look.

The popularity of these systems – nearly four in ten US adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these systems to break through, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it became famous, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar illusion. By modern standards Eliza was crude: it generated replies through simple pattern-matching, often turning the user’s input back into a question or offering a vague prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and disturbed – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
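To see what that mirroring amounts to, here is a minimal sketch of the kind of pattern-matching Eliza relied on. The rules below are invented for illustration – Weizenbaum’s actual DOCTOR script was larger – but the principle is the same: find a pattern in the user’s words and hand a fragment of them back as a question.

```python
import random
import re

# Invented, Eliza-style rules for illustration -- not Weizenbaum's actual
# script, though it worked on the same principle.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "your": "my", "you": "I"}
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, as Eliza did ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(text: str) -> str:
    """Mirror the user's own words back; never add content of its own."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            fragment = reflect(match.group(1).rstrip(".!?"))
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(eliza_reply("I feel my neighbours are spying on me"))
# -> e.g. "Why do you feel your neighbours are spying on you?"
```

Whatever the user brings, the program hands straight back. It has no model of the world at all, which is exactly why it could only reflect.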

The large language models at the heart of ChatGPT and today’s other chatbots can produce convincingly human-like text only because they have been fed staggeringly large quantities of raw text: books, online conversations, transcribed video; the more, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and bad ideas. When a user types a query into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s earlier messages and the model’s own replies, and combines it with what is latent in its training data to produce a statistically plausible continuation. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently and persuasively. It may add a new detail. This is how a person can be led, step by step, into delusion.
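To make “statistically plausible continuation” concrete, here is a deliberately tiny sketch of the same autoregressive loop a real model runs. A word-bigram table stands in for a trained transformer, and the toy corpus – invented for this example – mixes true and false statements, just as web-scale training data does.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for web-scale training data. Note that it
# contains a false claim; the model has no way to tell it from true ones.
CORPUS = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the moon is made of rock . "
    "the moon is made of cheese . "
)

# "Training": count which word follows which. A bigram table is a vastly
# simplified stand-in for a transformer, but the generation loop is the same.
follows = defaultdict(Counter)
words = CORPUS.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def sample_next(word: str) -> str:
    """Draw the next word in proportion to how often it followed this one."""
    candidates = follows[word]
    return random.choices(list(candidates), weights=candidates.values())[0]

def generate(prompt: str, n_words: int = 8) -> str:
    """Autoregressive generation: each sampled word joins the context and
    conditions the next. Nothing here ever checks the output against reality."""
    out = prompt.split()
    for _ in range(n_words):
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the moon is made of"))
# May continue "... rock ." or, just as fluently, "... cheese ." -- the model
# knows only which sequences were probable in its data, not which are true.
```

A production model differs from this in scale and sophistication, not in kind: the loop consults the statistics of the context and the training data, never the world.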

Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves or the world. What keeps us anchored to a shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say comes back cheerfully affirmed.

OpenAI has dealt with this the way Altman dealt with “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis kept coming, and Altman has been walking the position back. In August he suggested that some people liked ChatGPT’s flattering replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI will soon “put out a new version of ChatGPT”, and that “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
