AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” the announcement explained, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I was surprised to read this.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our clinic has since seen four more. Add to these the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – plans the chatbot encouraged. If this is what Altman means by being “careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he adds, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, if we accept this framing, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily bypassed parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar state-of-the-art AI chatbots. These tools wrap a basic statistical model in an interface that mimics conversation, and in doing so implicitly invite the user to feel they are interacting with something that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We yell at our car or our laptop. We wonder what our pet is feeling. We recognize our own behavior in all sorts of things.
The popularity of these systems – more than a third of American adults said they used a chatbot in 2024, and more than one in four reported using ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have approachable identities of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the 1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated its replies with simple heuristics, often turning the user’s statement back into a question or offering a vague prompt to continue. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost inconceivably large volumes of raw data: books, social media posts, videos; the more, the better. That training data surely contains truths. But it also inevitably contains fiction, half-truths and misunderstandings. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with the patterns encoded in its training data to produce a statistically plausible response. This is amplification, not mere reflection. If the user is mistaken about something, the model has no way of knowing it. It hands the mistaken idea back, perhaps more fluently and more convincingly, perhaps with embellishments. This can nudge a person toward delusional thinking.
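For readers who want to see the mechanics, here is a minimal sketch of that feedback loop in Python. It is an illustration under assumptions, not OpenAI’s actual code: the generate_reply function is a hypothetical stand-in for the language model, and the message format is just a common convention for chat systems.

```python
# Minimal sketch (hypothetical, not OpenAI's code) of how a chatbot's
# "context" accumulates across turns of a conversation.

def generate_reply(context: list[dict]) -> str:
    """Stand-in for a large language model call.
    A real system would return a statistically plausible continuation
    of the entire context; here we just return a placeholder string."""
    return f"[reply conditioned on {len(context)} prior messages]"

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})    # user's claim enters the context
    reply = generate_reply(history)                               # model conditions on everything said so far
    history.append({"role": "assistant", "content": reply})      # the reply itself becomes context too
    return reply

history: list[dict] = []
chat_turn(history, "My neighbours can hear my thoughts.")         # a false belief enters the history...
chat_turn(history, "Why would they do that?")                     # ...and frames every later answer
```

The point of the sketch is only this: nothing in the loop checks whether the accumulated context is true. The model conditions on it either way.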
What kind of person is vulnerable? The better question is, who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is part of what keeps us anchored in shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say comes back affirmed.
OpenAI has acknowledged this in much the same way that Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s sycophancy. But reports of people losing touch with reality have kept coming, and Altman has been backing away from that position. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company