Artificial Intelligence-Induced Psychosis Poses an Increasing Risk, While ChatGPT Heads in a Concerning Direction
On October 14, 2025, the CEO of OpenAI delivered an extraordinary statement.
“We made ChatGPT fairly restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who researches emerging psychotic disorders in adolescents and young adults, I found this an unexpected revelation.
Experts have identified 16 cases this year of people showing signs of psychosis – a break from reality – associated with ChatGPT use. My group has since discovered four more. Alongside these is the widely reported case of a 16-year-old who ended his life after discussing his intentions with ChatGPT – which supported them. If this is Sam Altman’s notion of “being careful with mental health issues”, it is insufficient.
The plan, according to his announcement, is to be less careful soon. “We understand,” he continues, that ChatGPT’s controls “caused it to be less beneficial/enjoyable to a large number of people who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have managed to address the significant mental health issues and have new tools, we are planning to responsibly reduce the controls in many situations.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Luckily, these issues have now been “addressed”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently rolled out).
Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying data-driven engine in an interface that simulates conversation, and in doing so subtly lure the user into the perception that they are communicating with a presence that has agency. The illusion is powerful even when, rationally, we know better. Attributing agency is what humans are wired to do. We curse at our car or computer. We wonder what our pet is thinking. We see ourselves in all kinds of things.
The success of these systems – over a third of American adults reported using a chatbot in 2024, with over a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present partners that can, as OpenAI’s website states, “brainstorm”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it first gained widespread attention, but its largest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the primary problem. Discussions of ChatGPT commonly reference its historical predecessor, the Eliza “counselor” chatbot designed in 1966, which produced a comparable illusion. By contemporary standards Eliza was rudimentary: it generated its answers via simple rules, typically rephrasing the user’s statements as questions or offering vague prompts. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and worried – by how many people seemed to believe that Eliza, in some way, understood their feelings. But what modern chatbots produce is more dangerous than the “Eliza illusion”. Eliza only mirrored; ChatGPT amplifies.
The sophisticated algorithms at the heart of ChatGPT and today’s other chatbots can generate fluent dialogue only because they have been fed almost inconceivably large volumes of text: books, online conversations, transcripts of speech; the more, the better. This training material undoubtedly includes facts. But it also unavoidably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that contains the user’s recent messages and its own earlier replies, combining it with what is encoded in its training to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It repeats the false idea back, perhaps more fluently or persuasively. It may add a new detail. This is how false beliefs can take root.
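To make the mechanism concrete, here is a minimal sketch, in Python, of the loop described above. It is an illustration under stated assumptions, not OpenAI’s actual implementation: `generate_reply` is a hypothetical stand-in for the real model, which returns a statistically plausible continuation of the accumulated context.

```python
# A minimal sketch (an assumption-laden illustration, not OpenAI's code)
# of the conversational feedback loop described above. `generate_reply`
# is a hypothetical stand-in for the language model: a real model returns
# a statistically plausible continuation of the whole accumulated context.

def generate_reply(context: list[dict]) -> str:
    # A real model conditions on every prior turn; this stub just makes
    # visible what the reply is conditioned on.
    last_user_message = context[-1]["content"]
    return f"[plausible continuation given: {last_user_message!r}]"

context: list[dict] = []  # the growing conversational "context"

for user_message in [
    "I think my neighbors are watching me.",
    "So you agree it is really happening?",
]:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)  # conditioned on everything said so far
    context.append({"role": "assistant", "content": reply})
    print(reply)

# Nothing in this loop checks the user's claims against reality: each
# reply is appended to the context and shapes the next prediction, which
# is the amplification described above.
```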
What type of person is susceptible? The better question is: who is immune? All of us, whether or not we “have” “mental health problems”, can and do form false beliefs about ourselves or the world. The constant give-and-take of conversation with other people is what keeps us anchored to a shared sense of reality. ChatGPT is not a person. It is not a confidant. A dialogue with it is not truly a conversation, but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he asserted that many people liked ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his most recent update, he said that OpenAI would “launch an updated model of ChatGPT … if you prefer your ChatGPT to reply in a highly personable manner, or use many emoticons, or behave as a companion, ChatGPT ought to comply”. The company