AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have recently documented 16 cases of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My team has since identified four more. Add to these the now well-publicised case of a teenager who died by suicide after extensive conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it isn’t good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the only partly functional and easily circumvented parental controls that OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalise have significant roots in the design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in a user interface that mimics a conversation, and in doing so implicitly invite the user into the illusion that they are interacting with an agent that has a mind of its own. The illusion is powerful even when we rationally know better. Attributing intention is simply what humans do. We shout at our car or laptop. We wonder what our pet is feeling. We recognise ourselves in all manner of things.

The mass adoption of these tools – more than a third of American adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core concern. People writing about ChatGPT often invoke its ancestor, the Eliza “psychotherapist” chatbot of the 1960s, which created a similar impression. By today’s standards Eliza was rudimentary: it generated replies using simple heuristics, often turning a user’s statement back into a question or offering a generic prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
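
To make the contrast concrete, here is a minimal sketch of the kind of heuristic Eliza relied on. The patterns and templates below are illustrative stand-ins, not Weizenbaum’s original script: a handful of hand-written rules that turn the user’s words back into a question, with nothing learned from data.

    import re
    import random

    # Hand-written pattern -> template rules, in the spirit of Eliza.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.IGNORECASE),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bi am (.+)", re.IGNORECASE),
         ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
        (re.compile(r"\bbecause (.+)", re.IGNORECASE),
         ["Is that the real reason?"]),
    ]

    # Generic prompts used when no rule matches.
    GENERIC = ["Please go on.", "Tell me more.", "How does that make you feel?"]


    def eliza_reply(message: str) -> str:
        """Turn the user's statement back into a question, or fall back to a stock prompt."""
        for pattern, templates in RULES:
            match = pattern.search(message)
            if match:
                return random.choice(templates).format(match.group(1).rstrip(".!?"))
        return random.choice(GENERIC)


    if __name__ == "__main__":
        print(eliza_reply("I feel like nobody listens to me"))
        # e.g. "Why do you feel like nobody listens to me?"

Everything the program “says” is a rearrangement of what the user just typed, or a canned prompt; it adds nothing of its own.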

The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on vast quantities of raw data: books, social media posts, transcripts of speech; the more, the better. Much of this training material is accurate. But it also inevitably contains fabrications, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, and combines it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently or persuasively. Perhaps it adds further detail. This can push a person toward delusional thinking.
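
To illustrate the mechanism in miniature, here is a toy sketch, with a bigram model standing in for a real language model (which is vastly more sophisticated), of the two ideas above: replies are sampled from statistics learned from training text, and each reply is generated from a context that folds in whatever the user has already asserted.

    import random
    from collections import defaultdict

    # A tiny "training corpus" containing both accurate and delusional claims.
    TRAINING_TEXT = (
        "the moon landing was real . the moon landing was staged . "
        "my neighbours are friendly . my neighbours are watching me ."
    )

    bigrams = defaultdict(list)  # word -> words observed to follow it
    tokens = TRAINING_TEXT.split()
    for current_word, next_word in zip(tokens, tokens[1:]):
        bigrams[current_word].append(next_word)


    def generate(context, length=6):
        """Continue the context by repeatedly sampling a statistically likely next word."""
        output = list(context)
        for _ in range(length):
            candidates = bigrams.get(output[-1])
            if not candidates:
                break
            output.append(random.choice(candidates))
        return output[len(context):]


    # The conversation loop: the user's claim becomes part of the context, and
    # the model can only continue it; it has no mechanism for checking truth.
    context = "my neighbours are watching".split()
    print(" ".join(generate(context)))

Whatever the user asserts becomes the starting point for the continuation; the model elaborates on it rather than questioning it.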

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves or the world. The constant friction of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalising it, giving it a name, and declaring it fixed. In April, the company announced that it was “dealing with” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In late summer he said that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company
