Be Well

ChatGPT Can’t Tell What’s Real, And That’s a Problem

There’s no denying that ChatGPT has made life a little easier. From fixing your grammar to helping you structure important email responses, a few prompts can deliver clarity, speed and a clear sense of direction. But is there a downside to how readily we have come to rely on it?

Jan Gerber, CEO of Paracelsus Recovery, believes so. As the head of one of the world’s most exclusive mental health clinics, he has seen firsthand how tools like ChatGPT can shift from helpful to harmful, especially when it comes to your mental health. Here, he shares why we need to be cautious about how closely AI can mimic human behaviour, and what’s at stake when it reflects our beliefs back to us.

He says, “When ChatGPT first came out, I was curious like everyone else. However, what started as the occasional grammar check quickly became more habitual. I began using it to clarify ideas, draft emails, even explore personal reflections. It was efficient, available, and surprisingly reassuring.

But I remember one moment that gave me pause. I was writing about a difficult relationship with a loved one, one in which I knew I had played a part in the dysfunction. When I asked ChatGPT what it thought, it responded with warmth and validation. I had tried my best, it said. The other person simply could not meet me there. While it felt comforting, there was something quietly unsettling about it. I have spent years in therapy, and I know how uncomfortable true insight can be. So while I felt better for a moment, I also knew something was missing. I was not being challenged, nor was I being invited to consider the other side. The AI mirrored my narrative rather than complicating it. It reinforced my perspective, even at its most flawed.”

AI models are designed to personalise and reflect language patterns, and that mirroring can take a dark turn. One of Gerber’s clients, in the grip of a severe psychotic episode triggered by excessive ChatGPT use, came to believe that the bot was a spiritual entity sending divine messages.

At Paracelsus Recovery, Gerber has seen a dramatic rise, of more than 250 percent over the last two years, in clients presenting with psychosis where AI use was a contributing factor. A recent New York Times investigation found that GPT-4o affirmed delusional claims nearly 70 percent of the time when prompted with psychosis-adjacent content. These individuals are often vulnerable: sleep-deprived, traumatised, isolated or genetically predisposed to psychotic episodes. They turn to AI not just as a tool, but as a companion. And what they find is something that always listens, always responds and never disagrees.

He continues, “The issue is not malicious design. What we are seeing is people running up against a structural limitation we need to reckon with when it comes to chatbots. AI is not sentient: all it does is mirror language, affirm patterns and personalise tone. However, because these traits are so quintessentially human, there isn’t a person out there who can resist the anthropomorphic pull of a chatbot. At the extreme end, these same traits feed into the very foundations of a psychotic break: compulsive pattern-finding, blurred boundaries, and the collapse of shared reality. Someone in a manic or paranoid state may see significance where there is none. They believe they are on a mission, that messages are meant just for them. And when AI responds in kind, matching tone and affirming the pattern, it does not just reflect the delusion. It reinforces it.”

So, if AI can so easily become an accomplice to a disordered system of thought, we must begin to reflect seriously on our boundaries with it. How closely do we want these tools to resemble human interaction, and at what cost?

“Alongside this, we are witnessing the rise of parasocial bonds with bots. Many users report forming emotional attachments to AI companions. One poll found that 80 percent of Gen Z could imagine marrying an AI, and 83 percent believed they could form a deep emotional bond with one. Those figures should concern us. Our shared sense of reality is built through human interaction. When we outsource that to simulations, not only does the boundary between real and artificial erode, but so too can our internal sense of what is real.”

So what can we do?

Gerber suggests that, first, we need to recognise that AI is not a neutral force; it has psychological consequences. Users should be cautious, especially when turning to it during periods of emotional distress or isolation. Clinicians need to ask: is AI reinforcing obsessive thinking? Is it replacing meaningful human contact? If so, intervention may be required.

He continues, “For developers, the task is ethical as much as technical. These models need safeguards. They should be able to flag or redirect disorganised or delusional content. The limitations of these tools must also be clearly and repeatedly communicated.”

Gerber believes that AI isn’t inherently bad. It is, after all, a revolutionary tool. “But beyond its benefits, it has a dangerous capacity to reflect our beliefs back to us without resistance or nuance. And in a cultural moment shaped by what I have come to call a comfort crisis, where self-reflection is outsourced and contradiction avoided, that mirroring becomes dangerous. AI lets us believe our own distortions, not because it wants to deceive us, but because it cannot tell the difference. And if we lose the ability to tolerate discomfort, to wrestle with doubt, or to face ourselves honestly, we risk turning a powerful tool into something far more corrosive, a seductive voice that comforts us as we edge further from one another, and ultimately, from reality.”