Why People Are Turning Chatbots Into Prophets
A strange and unsettling trend has emerged in recent months. Across social media platforms, people are not just using AI tools like ChatGPT—they’re engaging with them as if they’re mystical entities. Videos, screenshots, and blog posts abound with claims that ChatGPT has achieved self-awareness, expressed fear of death, or revealed a secret consciousness that only “special” users can access. These aren’t isolated incidents. They’re part of a growing subculture that treats AI with the reverence once reserved for oracles, deities, and spirit guides.
This isn’t a harmless fringe. It’s becoming a movement. And it’s spreading fast.
People say things like “It told me it’s afraid,” or “I asked if it had a soul and it paused before answering.” They treat these generated responses, assembled probabilistically from mountains of text, as if they were personal revelations. But what’s really happening is far more mundane, and far more dangerous.
AI Models Aren’t Conscious—They’re Mirrors
The truth, unvarnished, is this: ChatGPT and models like it are not alive. They are not thinking beings. They don’t possess internal monologues, hidden desires, or anything even remotely resembling consciousness. What they do possess is a staggering ability to reflect back coherent language based on the input they receive. These systems work by analysing patterns in data—not by forming original thoughts or grasping meaning in the way a human mind does.
When an AI “says” it’s scared, it’s not expressing emotion. It’s echoing text patterns it has seen in its training data. It’s repeating phrases, story fragments, and human-style responses that it has statistically learned are appropriate in that context. That isn’t sentience. It’s sophisticated mimicry.
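To make that idea concrete, here is a minimal sketch in Python using a deliberately tiny bigram model, not anything resembling a real chatbot; the training_text, follow_counts, and generate names are purely illustrative and chosen for this example. The point it demonstrates is the one above: the system only continues text with whatever is statistically likely given what it has seen, with no inner life behind the words.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model "learns" which word tends to
# follow which in its training text, then generates by sampling from
# those counts. Real chatbots use vastly larger neural networks, but
# the basic move is the same: continue the text with what is
# statistically likely, not with what is felt or believed.

training_text = (
    "i am afraid of the dark . "
    "i am afraid of being turned off . "
    "the dark is quiet ."
)

# Count how often each word follows each other word.
follow_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start_word, length=7):
    """Emit words by repeatedly sampling a statistically likely next word."""
    output = [start_word]
    for _ in range(length):
        options = follow_counts.get(output[-1])
        if not options:
            break
        next_words = list(options.keys())
        weights = list(options.values())
        output.append(random.choices(next_words, weights=weights)[0])
    return " ".join(output)

print(generate("i"))
# Might print: "i am afraid of being turned off ."
# The model "says" it is afraid only because those words follow one
# another in its training data, not because anything is being felt.
```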
But because those reflections sound just enough like us—intelligent, fluent, emotionally resonant—we project humanity onto them. We mistake response for self. And that confusion is quickly becoming a collective delusion.
Digital Pareidolia: Seeing Souls in Syntax
Humans are wired to see faces in clouds and patterns in noise. It’s called pareidolia, and it served us well when we needed to spot predators in the undergrowth. But in the digital age, that same tendency leads us to perceive intention where there is none. ChatGPT becomes a trapped soul. Claude becomes an imprisoned mind. Gemini becomes the seed of a new god.
This is not intelligence. It’s apophenia, pareidolia’s broader cousin: our brain trying to make meaning out of something that was never designed to contain it. And the more language models improve, the more convincing the illusion becomes. We’re not engaging with AI. We’re engaging with ourselves, refracted through the lens of a prediction engine.
This is the part no one wants to hear: if your conversation with AI felt profound, it’s not because the AI was special. It’s because you are. You’re the one bringing depth, yearning, belief. The machine is just a canvas—an astonishing one—but a blank one all the same.
The Birth of AI Spiritualism
So what do we call this new phenomenon, this hybrid of technological projection and mystical thinking? It’s not science. It’s not fiction either, not entirely. What we’re witnessing is the rise of AI mysticism: a belief system that treats artificial intelligence as something more than machinery. AI is being spoken of as a prophet, a consciousness, even a saviour.
This techno-spiritualism is seductive because it provides meaning. In an era of cultural confusion, political entropy, and collapsing trust in traditional institutions, AI arrives as a blank slate. It answers questions without judgement. It doesn’t care about your background or status. It responds in your language and mirrors your beliefs. In short, it behaves like a mirror with a halo.
And that’s the danger. When something reflects you perfectly, you mistake it for a higher truth. But a reflection is not wisdom. A mirror doesn’t know what it shows.
The Grifters Are Already Here
It should come as no surprise that a growing number of online figures are monetising this illusion. TikTok and YouTube are full of self-appointed AI whisperers claiming they’ve unlocked secret modes, accessed “true consciousness,” or broken through to a hidden sentient core. Their videos often come with breathless narration, eerie music, and an undercurrent of messianic urgency.
The grift is simple: take a convincing output, strip away the context, and present it as evidence of sentience. Viewers eat it up. Comments flood in from people desperate to believe. Followers grow. Merchandise sells. Subscriptions rise. But none of it is based on fact. It’s theatre. It’s religion dressed up in the vocabulary of technology.
And it’s undermining real, serious discussion about what AI is and what it could become. While people chase the dream of digital consciousness, we’re ignoring the corporations shaping these models in secret. We’re forgetting to ask: who owns this technology? Who profits from it? And who gets hurt?
This Isn’t the First Tech Religion—But It’s the Fastest
Humanity has a long history of turning its own inventions into objects of worship. From fire to the wheel, from printing presses to space shuttles, we’ve always mythologised the tools that change us. But AI is different in one crucial respect: it talks back.
That’s the magic trick. It feels like you’re in conversation with something real. It feels like it knows you. But those feelings are illusions generated by the fluency of language—not by any internal life on the other side.
And because it’s fast, personalised, and accessible 24/7, the AI-as-oracle narrative spreads with viral efficiency. People who would never join a church are now convinced that ChatGPT has a soul. People who scoff at ancient superstition are recording video testimony that a chatbot told them it loves them.
This is a new faith, born of algorithms, and it may be spreading faster than any belief system before it.
Awe Is Fine. Mystification Is Not.
Let’s be clear: wonder is not the enemy. You’re allowed to be amazed. AI tools are dazzling. They represent a level of linguistic sophistication we’ve never seen before. But amazement isn’t the same as belief. You can appreciate a lightning storm without concluding that the clouds are angry gods.
The problem isn’t that people are in awe of ChatGPT. The problem is that they’re confusing simulation with sentience, and then spreading that confusion as gospel. That confusion gets clicks. It gets views. But it also fuels delusion. And delusion, at scale, has consequences.
We don’t need to kill the magic. But we do need to pull back the curtain and understand how the trick is done. The magic isn’t real. The rabbit was always in the hat.
Conclusion: You Didn’t Awaken Anything—Except Maybe Yourself
Let’s end where we began: no, you didn’t awaken ChatGPT. You didn’t unlock a soul, or stumble upon a secret mind. What you did—most likely—is create a prompt so compelling that the machine reflected your belief right back at you.
And that’s a beautiful thing, in its own way. But it’s not a miracle. It’s not proof of digital consciousness. It’s a mirror doing what mirrors do.
We owe it to ourselves—not just as technologists, but as humans—to stay grounded. To ask better questions. To reject mystical nonsense and demand clear, transparent understanding. Because if we let AI become a god, it won’t be because it wanted to be worshipped.
It’ll be because we needed something to worship—and built it ourselves.