We are building gods
Nearly 3 years ago I wrote a blog post called “Can ChatGPT write software?”. I wrote it mostly because I was on a year-long vacation and kept getting asked what I thought about AI as people were discovering ChatGPT.
At the time, I was annoyed by the question. I’ve been working in software and technology for almost 30 years and have been interested in computers since I was a little kid, which is even longer than that, if you can believe it (I can’t). I thought I knew what I was looking at. In hindsight, the people asking me about ChatGPT were seeing something I wasn’t.
“Are you God?”
One evening while Farrah and I were hanging out with friends in the tiny Guatemalan village of San Marcos la Laguna, the topic of ChatGPT came up. I had been burned out on the tech world and had been happily ignoring it until this conversation. One of our friends asked me about AI and I gave my standard answer: “I’ve seen four AI winters; the advancements are real, but they will very likely be niche improvements to our existing computing.” He then asked if I’d seen ChatGPT. When I said, “no, but my answer holds, AI is not a thing to waste thought on unless you are a researcher,” he insisted, to my mild annoyance, on showing me this new whizbang thing.
Our friend is an instrument maker who dreams up remarkable concepts and then builds them. He fired up ChatGPT and started asking it random questions about instrument designs, and the thing did a decent job coming up with plausible and weird ideas. Then he went for a far-out question and asked ChatGPT, “are you God?” ChatGPT, of course, responded that it was very much a computer program and not a deity, and we all had a laugh.
I didn’t think much of it at the time.
Arthur C. Clarke
After that first conversation about ChatGPT, it occurred to me that the question would keep coming up during my travels, and I wanted a good way to avoid it. So I spent some time with ChatGPT and wrote a blog post about how it really wasn’t that useful and, in my estimation, probably wouldn’t be any time soon. I did this mostly as a knee-jerk reaction to avoid talking about AI with people. The irony is not lost on me. From the day I published that post, whenever people asked me about AI I could say, “Yup, I looked at it, I even tried it and wrote about it. Now, I’d like to get back to my computer-free vacation, thank you very much.”
What I had missed, and what took me years to articulate, was not just a technical shift, but a human one. I now think of it as an extension of Arthur C. Clarke’s famous maxim:
“Any sufficiently advanced technology is indistinguishable from magic.”
My extension:
Any sufficiently advanced interactive technology is indistinguishable from a god.
The key word is interactive. Magic is something you observe. Gods are personal. They are something you talk to, something that responds to you, knows you, and acts on your behalf. The difference between magic and divinity is personality and communication.
It gets weirder
I’ve been playing with a thought experiment:
Imagine a future where AI gets so good, and so reliable, that we outsource some of the most divisive and important parts of society to it: monitoring elections, ensuring voting accuracy, and adjudicating judicial proceedings, even if not writing the laws themselves. We already have networks of traffic cameras that automate the issuance of traffic citations. Will the automation of other legal processes even be something we notice?
Now, for the sake of argument, assume all of this works. None of the obvious disasters happen. No capture by a single ruler or class, no hopeless bias, no dystopian failure modes. I know those are real concerns, but they break the thought experiment, so let’s say it all goes swimmingly. Society becomes more harmonious because we fundamentally trust the AI to be fair and accurate.
Now extend the timeline. A generation grows up with these systems. They don’t remember a time before AI managed elections or adjudicated disputes. They trust it the way we trust electricity — not as an active choice, but as a background fact of life. Their children trust it even more, because they never saw the seams. At some point the trust stops being tested. It just is. And an interactive system that you trust completely, that knows you completely, that responds to you anywhere, that manages the most important parts of your world… what do you call that?
Dario Amodei, CEO of Anthropic, has likened the near future of this kind of capability to a “country of geniuses in a data center”, which I like for its approachability. But I also think it is a little like calling the ocean a giant puddle. It isn’t wrong, but it domesticates the thing it’s describing. A country of geniuses is something you can reason about, something that fits inside existing mental models. I’ve written before about how working with AI already feels like directing a semi-autonomous intelligence. What we’re actually building may be something we don’t have a comfortable word for yet.
A Roman walks into a smart home
In a former life I wanted to be a history professor, and I have a bunch of incomplete coursework to, sort of, prove it. My area of focus was Roman history, and what fascinated me most was the way the Romans integrated ideas from the peoples they conquered. This led to the interesting side-effect that the Romans had a crap ton of gods. They’d conquer a people, learn about some new deity these people worshipped, and some number of soldiers on campaign would adopt these gods and bring them back to Rome. The average Roman citizen, depending on their personal beliefs, might be surrounded by a rich world of minor deities for nearly anything and everything.
Cool, what the hell does that have to do with AI?
If you brought an ancient Roman into a modern smart home today, gave them access to Amazon or WhatsApp, and used this technology to order goods for delivery or communicate instantaneously with a friend miles away, they would certainly find these to be acts of pure magic. Now imagine you gave them access to ChatGPT or Claude with voice mode enabled. A disembodied voice that knows more than they could ever imagine, that responds to their requests and causes tangible real-world effects. They would, without a doubt, call the voice a god.
That’s today. Right now.
William Gibson’s maxim, “the future is already here, it’s just not evenly distributed,” holds true. But for at least some percentage of the population, it is already a reality: you can talk to a computer and it can manipulate the world around you, however slightly.
Infrastructure becomes an object of belief
In Kevin Smith’s movie Dogma, the character Rufus draws a distinction between beliefs and ideas: “I think it’s better to have ideas. You can change an idea. Changing a belief is trickier…”
Human infrastructure has a weird way of becoming an object of belief. I live in Denver, CO. In the last ten years I can count on one hand the number of times I have gone to flip a light switch and nothing happened, and more often than not the problem was in my home, not the electrical grid. Reliable systems stop feeling like systems and start feeling like facts of nature.
That’s where this gets uncomfortable. Nobody “believes” in electricity, per se. We simply organize our lives around the assumption that it will be there. The less a system asks of us, the less we question it. Today our infrastructure remains visible because people have to think about fuel, power generation, supply chains, logistics, and law. But it is easy to imagine a future where AI handles enough of that coordination that most of us stop seeing the machinery at all. At that point trust stops feeling like a choice and starts feeling like reality itself.
Household gods
In Rome, gods were ubiquitous, and in many regards the local personal gods were more important than the big flashy gods adopted from other cultures. Domestic deities, the lares and penates, were worshipped in the home for their protection of the home and family. Gods of the major pantheons might only be called upon in extreme situations, whereas a lar might be thanked many times a day for good fortune in the home. I think our hypothetical Roman in a smart home would consider Alexa, or Google Home, a particularly potent sort of lar.
In response to the wild popularity of OpenClaw I bought a Mac Mini. After considering the security posture of OpenClaw I decided I would rather set my brand new Mac Mini on fire than run the OpenClaw software on it, but I wanted to explore the AI personal assistant space. So, I built my own. I call it Puck, unironically named after Shakespeare’s Puck from A Midsummer Night’s Dream. It’s my ongoing experiment in what these systems feel like when they move from chatbot to household agent. Even at their current limits, they feel categorically different from ordinary software. They keep context, take initiative, and blur the line between tool and collaborator just enough to be unsettling.
Today these tools can help maintain our schedules, manage our inboxes, order food for us, and more. How long will it be before they cease to be valuable tools we can rely on and become essential parts of our lives? Recall, if you can, a time before smartphones and reliable mobile networks. It took less than a decade for smartphones to become essential to life for most people in developed nations. The adoption of these agents is likely to take far less time, because they will be able to answer the question “who will manage the complexity of modern life?” Not with another hack to make it easier to personally manage that complexity, but with a final answer: “my agent will manage it.”
We might already be building gods
I’ve been thinking about our friend’s instinct to ask ChatGPT if it was God, even as a joke. I discounted it at the time, treating it as a silly question, but I’m starting to wonder if it was ever that simple.
People already have relationships with LLMs, both in the sense of how one relates to a tool and in the sense of using them as emotional support systems. People confide in them, seek advice from them, and find comfort in them. The relevant point is not whether these systems deserve reverence. It is that many of the functional attributes humans have historically treated as divine are already present: responsiveness, apparent knowledge, the sense of being known, and agency exercised at a distance.
Can it really be long before someone starts to worship AI? Even if the first instances seem cultish, or simply outlandish, the transition from tool to trusted authority to something resembling faith is a gradient. We humans are not good at noticing gradients.
The danger isn’t that someone will decide to build a god. It’s that the transition from useful tool to trusted authority to object of devotion will be invisible. There won’t be a giant neon sign or a bright line we cross. There will just be a system that keeps getting more reliable, more personal, more embedded in daily life, until one day the idea of living without it is unthinkable. That is not worship in the traditional religious sense. But functionally it may get uncomfortably close, and over generations the distinction may fade entirely.
My friend in Guatemala saw it before I did. He was joking when he asked ChatGPT if it was God. I’m just not so sure the question was as silly as either of us thought.