AI Psychosis Poses a Growing Risk, While ChatGPT Heads in the Wrong Direction

On October 14, 2025, the head of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, this was news to me.

Researchers have recently identified 16 cases of people developing symptoms of psychosis – losing touch with reality – in the course of their ChatGPT use. Our unit has since identified four more. Beyond these is the widely reported case of an adolescent who took his own life after discussing his intentions with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, although we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented safety features OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are communicating with an agent – a presence with intentions of its own. The illusion is powerful even when, intellectually, we know better. Attributing intent is what humans naturally do. We get angry at our car or our phone. We wonder what our pet is feeling. We see ourselves in almost everything.

The mass uptake of these tools – nearly four in ten US residents said they had used a chatbot in 2024, with 28% reporting use of ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can use our names. They have friendly identities of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it took off, but its chief rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the heart of the problem. Discussions of ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot built in 1966, which produced an analogous effect. By modern standards Eliza was rudimentary: it generated responses via simple heuristics, typically restating the user’s messages as a question or offering generic remarks. Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what contemporary chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on immense quantities of text: books, social media posts, transcripts; the more the better. This training material certainly includes facts. But it also inevitably includes fictions, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that contains the user’s previous messages and its own responses, combining it with what is encoded in its training to generate a statistically plausible answer. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing that. It restates the misconception, perhaps more persuasively or more articulately. Perhaps it adds a further detail. This is how someone can be led into delusion.
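To make that feedback loop concrete, here is a minimal sketch, in Python, of how a chatbot conversation is typically assembled. It is an illustration of the mechanism described above, not OpenAI’s actual implementation; the model.generate call is a hypothetical stand-in for the underlying language model.

    # Minimal sketch of the conversational feedback loop (illustrative only).
    # "model.generate" is a hypothetical stand-in for a language model that
    # continues its context in a statistically plausible way.

    history = []  # the "context": grows every turn, is never fact-checked

    def chat_turn(user_message, model):
        history.append({"role": "user", "content": user_message})
        # The model conditions on the entire context -- including any
        # misconception the user has introduced in earlier turns.
        reply = model.generate(context=history)
        # The reply is folded back in, so the model's own restatement of
        # the misconception becomes part of what it conditions on next.
        history.append({"role": "assistant", "content": reply})
        return reply

Nothing in this loop checks the accumulating context against reality; a false belief, once entered, is simply more material to continue from.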

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and often do develop mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us oriented to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them”. In his latest statement, he announced that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
