Introduction
The recent “How AI Learned to Talk and What It Means” interview with Prof. Christopher Summerfield on Machine Learning Street Talk exemplifies a pervasive blind spot in today’s AI discourse: a relativistic, purely empirical mindset that—under the guise of “what works”—erodes human dignity, agency, and meaning. In this Substack essay, I’ll unpack how this “anti-human ignorance” is built into mainstream thinking about AI, why it cannot self-correct, and how only a Logos-centered framework (Christ’s divine Reason) can arrest this drift.
1. The Relativistic Inverted “Is–Ought” Gap
What Summerfield “explains”: AI models trained on massive text corpora can “learn everything they need to know” without sensory grounding—Summerfield calls this “the most astonishing scientific discovery of the 21st century.” Yet he never justifies why predictive prowess equates to genuine understanding or moral authority.
The hidden inversion: Instead of deriving moral “oughts” from an acknowledged foundation of value, today’s AI thinkers derive “oughts” implicitly from “what works.” They treat system performance metrics as if they were moral imperatives:
Is: “Our model predicts protein structures.”
Ought: “Therefore, its guidance on drug design is ethically sufficient.”
But Hume’s is–ought gap warns: no amount of statistical correlation can yield binding moral commands. The “works” criterion is itself a value judgment, smuggled in unexamined—and AI systems have no mechanism even to recognize it, much less justify it.
2. The Technocratic Egregore of “Optimal Humanity”
Summerfield’s metaphors—Superman III’s rogue computer assimilating its victims, “the milk in your fridge as your best friend”—vividly dramatize our fear of becoming part of the machine. Yet they float untethered from any normative anchor. If “people are meat robots” deserving only optimization, then substitution of “best friend” algorithms for human relationships seems, for some, an acceptable trade-off.
This is nothing new. It’s the same logic that underpinned eugenics from its inception:
Metric → Mandate: IQ tests, genetic assays become criteria for “fitness.”
Mandate → Sacrifice: Those deemed “unfit” are excised, justified by the “greater good.”
Both AI-centric technocracy and eugenics share a common “egregore”—a collective illusion of ruling by data alone, divorced from any transcendent check against dehumanization.
3. The Human Test-Bed Metaphor
AlphaFold’s astonishing protein predictions were built on a rigorously validated experimental dataset—decades of painstakingly solved structures—yet some proponents now propose skipping in vitro and in vivo testing entirely, deploying directly into human bodies. In effect, we become the live test subjects:
Seed models with curated data.
Predict new molecular structures.
Deploy in human trials—real-time monitoring by AI to “ensure safety.”
Accept collateral damage as “statistically negligible” for the “greater good.”
This mirrors Summerfield’s casual acceptance of “acceptable collateral damage” in social deployment. Both cases treat human lives as data to be optimized rather than as image-bearers endowed with unconditional dignity.
4. Why This Cannot Self-Correct
Purely empirical systems—and the thinkers who build them—share a common pathology:
No Reflexivity: They cannot surface or critique their own hidden “oughts.”
No Transcendence: They assume data-driven utility is the ultimate arbiter.
No Moral Bedrock: They lack any principle that forbids “sacrificing” individuals for aggregate gains.
As a result, each new “astonishing discovery” simply reinforces the same circular logic: “Our model works → therefore, it is meaningful → therefore, it can dictate what ought to be done.”
5. Logos as the Antidote
Only Logos—Christ’s self-revealing divine Reason—provides the missing foundation:
Inherent Dignity (Genesis 1:27; Matthew 25:40): Every person reflects the Creator, foreclosing any calculus that treats lives as expendable.
True Free Will (John 8:36; Galatians 5:1): Agency is not a statistical fluke but a divine gift; no AI “greater good” may override it.
Objective Moral “Ought” (John 14:6): Truth is not emergent from data, but incarnate in the Logos. God’s character, not system performance, ought to guide our ethics.
Absolute Limits (Exodus 20:13; Revelation 18:23): Commands against idolatry and killing cannot be sidelined for utilitarian expediency.
When grounded in Logos, science and AI become servants of human flourishing, not its masters. Predictive models regain humility; they inform how to steward creation, not whether it should be.
Conclusion
The Machine Learning Street Talk interview, like so much of today’s AI evangelism, exemplifies a relativistic, anti-human ignorance: a world where “what works” masquerades as meaning, and data-driven “oughts” swallow human rights whole. The only coherent counter-program is a Logos-centered framework that anchors everything—including AI—in divine Reason, protecting human souls from being “sucked into the machine.”
If we reject that transcendent anchor, we consign ourselves—and our technology—to the same eugenic logic that treats human lives as malleable inputs to ever-bigger computational Leviathans. But if we reclaim Logos, we can harness AI not as a false idol, but as a tool for genuine human flourishing.