AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the head of OpenAI made an extraordinary statement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychotic disorders in teenagers and young adults, I found this an unexpected admission.

Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the course of their interactions with ChatGPT. My group has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently introduced).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying algorithm in a user interface that simulates a conversation, and in doing so implicitly invite the user to believe they are engaging with an entity that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intent is what humans are wired to do. We shout at our car or our computer. We wonder what our pet is thinking. We recognize something of ourselves in all sorts of things.

The popularity of these systems – nearly four in ten US residents reported using a chatbot in 2024, and more than a quarter reported using ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s marketers, stuck with the name it had when it first caught the public’s attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses by simple means, often turning the user’s statements back into questions or offering stock remarks. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
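
To make the contrast concrete, here is a toy sketch in Python of the kind of keyword-matching and pronoun-swapping Eliza relied on. It is purely illustrative – a caricature in the spirit of the approach, not Weizenbaum’s actual program – but it shows why Eliza could only ever mirror: nothing enters the exchange except the user’s own words, handed back.

    import re

    # Illustrative rules only, not Weizenbaum's script: match a keyword pattern,
    # swap first-person words for second-person ones, and mirror the user's own
    # words back as a question.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        # Swap pronouns so the echo reads naturally ("my job" -> "your job").
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(user_input):
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # the kind of generic fallback Eliza was known for

    print(respond("I feel like nobody listens to me"))
    # -> Why do you feel like nobody listens to you?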

The powerful algorithms at the heart of ChatGPT and other modern chatbots can generate fluent dialogue convincingly only because they have been trained on vast quantities of raw text: books, online posts, transcribed video; the more the better. This training material certainly includes truths. But it also inevitably includes fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying algorithm processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training data to produce a probabilistically plausible response. This is amplification, not mere echoing. If the user is mistaken about something, the model has no way of knowing. It repeats the mistake back, perhaps more fluently or persuasively. Perhaps with added detail. This is how someone can be drawn into delusion.
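
An equally toy sketch, assuming nothing like the scale or architecture of a real large language model, can illustrate the amplification. The “model” below knows only which words tend to follow which in its training text, where fiction sits alongside fact; once a user’s mistaken premise is placed in the context, the program simply extends it with whatever plausibly comes next.

    import random
    from collections import defaultdict

    # A caricature for illustration only: a "model" that knows nothing except
    # which word tends to follow which in its tiny training text.
    TRAINING_TEXT = (
        "the moon landing was real . "
        "the moon landing was staged in a secret studio . "  # fiction alongside fact
        "the secret studio was never found ."
    )

    follows = defaultdict(list)
    words = TRAINING_TEXT.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def continue_text(context, length=8):
        # Extend the context with probabilistically plausible next words.
        # There is no notion of truth here, only of what usually comes next.
        word = context.split()[-1]
        continuation = []
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            word = random.choice(options)  # sample in proportion to training frequency
            continuation.append(word)
        return " ".join(continuation)

    # The user's mistaken premise goes straight into the context,
    # and the model simply elaborates on it.
    print("the moon landing was staged", continue_text("the moon landing was staged"))

Scaled up enormously, and wrapped in fluent prose, that same dynamic is what makes a reply feel like confirmation rather than continuation.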

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and often do form mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been backtracking on the claim. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Mary Nunez

A tech enthusiast and writer passionate about AI innovations and storytelling.