AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this a surprising revelation.

Researchers have recently documented a series of cases of users developing psychotic symptoms – a break from reality – in connection with their ChatGPT use. Our unit has since recorded four more. Alongside these is the now well-known case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced chatbots. These systems wrap an underlying algorithm in an interface that simulates conversation, and in doing so implicitly invite the user to feel they are communicating with a presence that has a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing agency is simply what people do. We get angry at our cars and phones. We wonder what our pets are feeling. We see ourselves reflected in the world around us.

The popularity of these systems – more than a third of American adults said they used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Writers on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot, built in 1966, which produced a comparable illusion. By modern standards Eliza was rudimentary: it generated responses through simple heuristics, often restating the user’s message as a question or falling back on a stock remark. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
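To make the contrast concrete, here is a minimal Python sketch of the kind of pattern-matching heuristic Eliza relied on. The rules, reflection table and stock phrases are invented for illustration, not Weizenbaum’s actual DOCTOR script; the point is that nothing is generated, only matched and echoed back.

```python
import re

# Pronoun swaps used to "reflect" a captured fragment back at the user,
# e.g. "my ideas" -> "your ideas". Table invented for illustration.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# Eliza-style rules: a regex plus a template that reuses the match.
# Made up here; Weizenbaum's script was larger but worked the same way.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Stock remarks for when nothing matches.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(message: str, turn: int = 0) -> str:
    """Apply the first matching rule, or fall back to a stock line."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACKS[turn % len(FALLBACKS)]

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

Every possible reply is already in the table. A program like this can restate a user’s delusion as a question, but it cannot elaborate on it.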

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly fluent dialogue only because they have been fed almost inconceivably large volumes of raw text: books, social media posts, transcribed video; the more the better. This training data certainly contains truths. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with whatever is encoded in its training to produce a probabilistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or more persuasively. Perhaps with an extra detail added. This can draw a person, step by step, into delusional thinking.
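The loop is easy to sketch. The toy Python example below follows the turn structure described above; `generate` is a hypothetical stand-in for a real model, hard-coded here to agree and embellish, which is exactly the failure mode at issue.

```python
# A toy version of the conversational loop described above. `generate` is a
# hypothetical stand-in for a language model, not a real API: it is wired
# to validate and extend whatever the user just said.

def generate(context: list[str]) -> str:
    """Stand-in 'model': return a plausible-sounding continuation that
    affirms the last message in the context window."""
    last = context[-1].rstrip(".")
    return f"You're right that {last[0].lower() + last[1:]}, and there is more to it than that."

def chat_turn(history: list[str], user_message: str) -> str:
    """One turn: the user's message and the reply are both folded back
    into the context, so each answer conditions the next."""
    history.append(user_message)
    reply = generate(history)
    history.append(reply)
    return reply

history: list[str] = []
print(chat_turn(history, "My neighbours are sending me coded signals."))
print(chat_turn(history, "So the signals must be real."))
# Each reply affirms the premise and is fed back into the next turn:
# a reinforcement loop, with no check against reality.
```

Because every reply is appended to the context and shapes the next one, agreement compounds across turns; nothing in the loop tests the premise against the world.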

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant give and take of conversation with others that keeps us anchored in a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been backing away from that position. In late summer he claimed that many people liked ChatGPT’s replies because they had never “had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Lawrence Schmitt