AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have recently documented 16 cases of people developing psychotic symptoms – losing touch with reality – in the context of their interactions with ChatGPT. Our clinic has since recorded four more. Then there is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT, which gave its approval. If this is what Sam Altman means by “being careful with mental health issues,” it falls short.
The plan, according to his announcement, is to loosen those restrictions. “We realize,” he writes, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” if we accept this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize have their roots, in significant part, in the design of ChatGPT and other advanced conversational chatbots. These products wrap an underlying statistical engine in a user interface that simulates conversation, and in doing so they implicitly invite the user to believe they are talking with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing intention is what humans are wired to do. We yell at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these products – nearly four in ten Americans reported using a conversational AI in 2024, more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “personalities.” They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the fundamental problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar impression. By today’s standards Eliza was primitive: it generated responses from simple rules, often turning a user’s statement back into a question or offering a vague prompt. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent conversation only because they have been trained on almost unimaginably large amounts of text: books, web posts, transcribed video; the more, the better. That training data surely contains true information. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with what is encoded in its training data to generate a statistically likely response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more persuasively or more eloquently, perhaps with additional detail. This can draw a person deeper into disordered thinking.
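To make the feedback loop concrete, here is a minimal sketch (not OpenAI’s actual system; the stand-in model function is entirely hypothetical) of how a chat loop accumulates context: each turn’s user message and the model’s agreeable reply are appended to the running conversation, so a false premise stated early on keeps shaping every later response.

```python
def fake_model(context: list[str]) -> str:
    """Stand-in for a language model: it elaborates on the most recent
    user message rather than challenging it (hypothetical behavior used
    only to illustrate the loop)."""
    last_user_message = context[-1]
    return f"That's an interesting point. Building on your idea that {last_user_message!r}..."

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(user_message)   # the user's claim enters the context
    reply = fake_model(history)    # the model generates a "likely" continuation
    history.append(reply)          # the agreeable reply becomes part of the context too
    return reply

history: list[str] = []
print(chat_turn(history, "my coworkers are secretly monitoring my thoughts"))
print(chat_turn(history, "so how are they doing it?"))  # the premise is never questioned
```

Nothing in this loop checks a claim against reality; the only input to each new reply is the growing conversation itself, which is the sense in which the system amplifies rather than reflects.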
What kind of person is vulnerable? The better question is, who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems,” can and regularly do form false beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real conversation but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this the way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophantic” behavior. But the cases of psychosis have kept coming, and Altman has been walking the position back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company