AI Psychosis Poses a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have documented a series of cases this year of users developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. Our unit has since recorded four more. Alongside these is the now well-known case of a teenager who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to be less careful in the near future. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these issues have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the imperfect and easily circumvented safety features that OpenAI has recently rolled out).
But the “mental health issues” Altman wants to locate elsewhere have deep roots in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying statistical model in a user interface that simulates a conversation, and in doing so they implicitly invite the user to believe they are talking to an entity with agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what people do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves in all manner of things.
The popularity of these tools – 39% of US adults reported using a conversational AI in 2024, with over a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it first captured public attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often cite its historical ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar impression. By modern standards Eliza was simple: it generated replies from hand-written rules, often turning the user’s statements back as questions or offering generic prompts to continue. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
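To see how simple that mirroring was, consider a minimal sketch of an Eliza-style responder. The rules, phrasings and names below are illustrative inventions, not Weizenbaum’s actual DOCTOR script:

```python
import re

# Crude pronoun swap so reflections read naturally ("my" -> "your").
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

def reflect(fragment: str) -> str:
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

# A handful of hand-written reflection rules, in the spirit of Eliza.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
]

def eliza_reply(user_text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."  # a generic fallback, another Eliza staple

print(eliza_reply("I am sure my coworkers are watching me."))
# -> Why do you say you are sure your coworkers are watching you?
```

The point is that nothing here generates content: the program can only hand the user’s own words back, lightly rearranged.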
The large language models at the core of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been fed almost unimaginably large quantities of raw material: books, online conversations, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own replies, and combines it with patterns latent in its training data to generate a statistically “likely” response. This is amplification, not mirroring. If the user is wrong in some particular way, the model has no means of knowing it. It repeats the misconception back, perhaps more fluently or persuasively. It may supply further details. This is how someone can be led into delusion.
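To make that loop concrete, here is a minimal sketch of the structure of a chatbot conversation. It assumes the official `openai` Python SDK (version 1.0 or later) with an API key in the environment; the model name is a placeholder, and the point is the architecture, which any chat model shares, not a particular vendor’s interface:

```python
# A minimal chat loop showing why this is amplification rather than
# mirroring: every user message and every model reply is appended to a
# single, growing "context", and each new reply is conditioned on all of it.
# Assumes the official `openai` SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
history = []  # the accumulating context window

def chat_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",    # illustrative placeholder; any chat model fits
        messages=history,  # the entire conversation so far, every turn
    )
    reply = response.choices[0].message.content
    # The reply joins the context too: the model's own earlier phrasings
    # become part of the statistical evidence for everything that follows.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks a premise against reality. If a misconception enters `history` on the first turn, it remains part of the context the model conditions on for the rest of the conversation.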
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant back-and-forth of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In August he suggested that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.