The Arrogance of Compute
Why Simulating Intelligence Isn't Understanding It
(Note: this is just a reproduction of the original X article I posted on March 23rd; you can find a link to that article here. In fact, I appear to be largely throttled on that site, so if you like the article and could retweet it, I would be grateful.)
There's a growing breed of thinkers today who equate raw computational power with human intelligence. They believe that if you stack enough GPUs, run enough models, and map enough neurons, you'll eventually arrive at something indistinguishable from the mind.
This isn't just naive. It's dangerous.
One of the loudest voices in this echo chamber is a young man whose confidence in math is matched only by his blind spot for meaning. His arguments boil down to this: if we can predict language with machine learning, we must understand language. If we call it a "neural net," it must be a brain. If it's faster, it's smarter. Case closed.
Except it's not.
The Chinese Room Revisited
This young man mocks Searle's famous Chinese Room thought experiment (which Penrose also takes up) without realizing the trap he's walking into. The point of that argument was never to deny that machines can process language; it was to highlight the difference between symbol manipulation and semantic understanding.
A machine might "say" something. But it doesn't know what it's saying. It has no concept of truth, self, or intent. It has no desire to communicate.
It's not thinking. It's mirroring.
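To make the distinction concrete, here is a minimal sketch in Python of a Chinese-Room-style responder. The rulebook and phrases are invented for illustration; the point is that nothing in the program, at any scale, ever touches what the symbols mean.

```python
# A toy "Chinese Room": pure symbol manipulation, no semantics.
# The rulebook below is invented for illustration; any lookup table would do.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我没有名字.",    # "What's your name?" -> "I have no name."
}

def operator_in_the_room(symbols: str) -> str:
    """Match the incoming symbols against the rulebook and hand back the
    prescribed reply. The operator never learns what any symbol means."""
    return RULEBOOK.get(symbols, "对不起, 我不明白.")  # "Sorry, I don't understand."

print(operator_in_the_room("你好吗?"))  # Fluent-looking output, zero understanding.
```

Swap the exact-match table for a statistical model with a trillion parameters and the replies become far more convincing, but the question Searle raised is untouched: where, in the mechanism, does understanding enter?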
The Fallacy of Naming
He declares victory by saying, "They even call it a neural net!"
Yes, and we call chess algorithms "grandmasters," too. But that doesn’t mean Deep Blue is pondering its next move with anxiety, ambition, or artistry.
Language is a metaphor machine. Naming something doesn't make it real.
The map is not the territory. And the model is not the mind.
The Energy Problem
The human brain runs on ~20 watts of power. That's less than a dim lightbulb. And yet, it can learn languages, make moral decisions, fall in love, paint cathedrals, and write poetry that reshapes nations.
By contrast, a large language model like GPT-4 reportedly cost on the order of a hundred million dollars to train, ran on tens of thousands of GPUs, and consumed tens of gigawatt-hours of electricity in the process.
Efficiency isn’t just a design detail. It’s evidence of a different category of intelligence.
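For rough scale, here is a back-of-the-envelope comparison. The cluster size, per-GPU wattage, and training duration below are round-number assumptions chosen for illustration, not reported GPT-4 figures.

```python
# Back-of-the-envelope energy comparison.
# All cluster figures are illustrative assumptions, not reported numbers.
BRAIN_WATTS = 20        # widely cited estimate for the human brain
GPU_WATTS = 700         # assumed draw per accelerator, including overhead
NUM_GPUS = 20_000       # assumed cluster size
TRAINING_DAYS = 90      # assumed training duration

cluster_watts = GPU_WATTS * NUM_GPUS
training_gwh = cluster_watts * 24 * TRAINING_DAYS / 1e9
brain_lifetime_gwh = BRAIN_WATTS * 24 * 365 * 80 / 1e9   # ~80-year lifespan

print(f"Cluster draw: {cluster_watts / 1e6:.0f} MW, "
      f"or about {cluster_watts // BRAIN_WATTS:,} brains running at once")
print(f"Training run: {training_gwh:.0f} GWh; "
      f"one brain's entire lifetime: {brain_lifetime_gwh:.3f} GWh")
```

Under these assumptions the gap in continuous power draw is nearly six orders of magnitude, and a single training run burns through roughly two thousand brain-lifetimes of energy. That is the sense in which efficiency points to a different category of intelligence.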
Gödel, Penrose, and the Limits of the System
Penrose, building on Gödel, argues that human consciousness must include something non-computable. There are mathematical truths we can see to be true that cannot be derived from within any consistent formal system rich enough to express arithmetic.
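For readers who want the formal result Penrose leans on, Gödel's first incompleteness theorem can be glossed as follows; this is the standard textbook statement, not Penrose's own wording.

```latex
% Standard gloss of Gödel's first incompleteness theorem.
Let $F$ be any consistent, effectively axiomatized formal system strong
enough to express elementary arithmetic. Then there is a sentence $G_F$
(the ``Gödel sentence'' of $F$) that $F$ can neither prove nor refute:
\[
  F \nvdash G_F \qquad\text{and}\qquad F \nvdash \lnot G_F,
\]
yet $G_F$ is true in the standard model of arithmetic. Penrose's further
step is that we can see the truth of $G_F$ from outside $F$, and so no
single such system can capture mathematical insight.
```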
And here’s the kicker: those who deny this are proving his point.
The boy with a calculator thinks he can compute the soul. But he forgets who’s holding the calculator. He forgets that he is not just observing meaning—he is participating in it.
Penrose himself, while committed to physicalism, has the intellectual humility to acknowledge that something about consciousness still eludes our current physics. He posits a possible role for quantum processes in the brain, not as a magical catch-all, but as a recognition of mystery within the physical.
But our young critic mocks this. Why? Because AI works.
That’s his argument.
He spends hours reciting mathematical doctrine, only to toss it aside and say, in effect, "I’ve seen it work, so it must be right."
He appeals to empiricism when it suits him, while rejecting the humility of saying, "we don’t yet know."
That’s not science. That’s hubris.
Intelligence Without Wisdom
We are raising a generation of brilliant fools.
They can model language but not speak truth. They can simulate empathy but not love. They can mimic reason but not embody wisdom.
And it’s not just misguided. It’s dehumanizing.
They’re replacing the mystery of personhood with the illusion of performance.
The Final Illusion
What these thinkers are really saying is this:
"There is nothing in the box but math."
But they forget that they are inside the box.
They did not create the universe. They do not sustain it. They cannot even explain the thing they are using to explain: themselves.
Until they face the limits of computation, they will never understand consciousness. Until they humble their math before mystery, they will never be wise.
And until they meet the Logos behind the logic, they will never see the light.
References & Further Reading:
Roger Penrose, The Emperor's New Mind
John Searle, Minds, Brains, and Programs
Kurt Gödel, On Formally Undecidable Propositions of Principia Mathematica and Related Systems
Thomas Nagel, What Is It Like to Be a Bat?
Josef Pieper, Leisure: The Basis of Culture

