Dear AI Alignment Community: A Call to Rethink Our Moral Compass
As professionals in AI alignment—researchers, engineers, ethicists, and strategists on platforms like LinkedIn—we’re all grappling with the same high-stakes puzzle: How do we align superintelligent systems with human values when those values seem fluid, contested, and often self-contradictory?
We debate value loading, reward hacking, and mesa-optimizers, but what if the root issue is our own circular thinking? What if our default to relativistic reasoning—treating “evolution” (biological, technological, or societal) as the ultimate good—quietly steers us toward neo-eugenics, a path none of us would consciously endorse?
I recently posed a philosophical riddle that cuts to the heart of this:
A relativist is basically a collectivist of one type or another, with a single exception: the one framework that provides an objective moral standard, a standard every other group of relativists can’t make sense of, because, by definition, they have no moral standard of their own.
Let’s unpack this for our community, exposing the flaws in our often-unexamined assumptions, and explore why the teachings of Jesus Christ offer a uniquely relevant framework for AI alignment—one that balances progress with humility, avoids totalitarian traps, and grounds human dignity in a transcendent Creator.
The Riddle Decoded: Relativism as Circular Optimization
In AI terms, relativism is like training a model without a fixed objective function. You input diverse data—cultural norms, stakeholder preferences, evolving societal “goods”—and let consensus emerge.
But this is inherently collectivist: the “truth” becomes whatever the group (or algorithm) aggregates, whether it’s a democratic vote, a market signal, or an evolutionary fitness landscape.
It’s efficient for short-term adaptation, but it loops endlessly, chasing proxies without a true north.
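To make the metaphor concrete, here’s a minimal toy sketch in Python (every name and number is hypothetical, not anyone’s actual training setup): fifty “agents” each nudge their value toward the group average, while the average is recomputed from the group itself.

```python
import random

# Toy model of relativistic "alignment": the only target is the group's
# own aggregate, so the target moves whenever the group moves.
agents = [random.uniform(-1.0, 1.0) for _ in range(50)]  # initial "values"

def consensus(values):
    """The 'objective' is just the current average of the group itself."""
    return sum(values) / len(values)

for step in range(1000):
    target = consensus(agents)            # objective recomputed from the group
    agents = [v + 0.1 * (target - v)      # each agent chases the consensus...
              + random.gauss(0.0, 0.05)   # ...plus cultural drift
              for v in agents]

# The group coheres internally, but *where* it coheres is path-dependent.
print(f"final consensus: {consensus(agents):+.3f}")
```

The loop never fails; it just never points anywhere. Run it twice and the group agrees on two different “goods”; that’s the circularity in miniature.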
The exception?
One collectivist framework claims an objective moral standard—universal, unchanging, and not derived from the group.
Relativists can’t integrate it because their system rejects absolutes by definition.
It’s like trying to plug a hard-coded ethical axiom into a reinforcement learning agent optimized for relative rewards: the model glitches, unable to reconcile the fixed value with its fluid parameters.
For us in AI alignment, this riddle reveals a blind spot.
We often embrace relativism under the guise of pluralism or pragmatism—“Let’s align AI with human flourishing, as defined by evolving consensus.” But without an objective anchor, we’re just optimizing a circle.
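Extending that same hypothetical toy, a single fixed term changes the dynamics qualitatively: add an external anchor (the stand-in here for an objective standard) and the consensus settles in the same place on every run.

```python
import random

ANCHOR = 0.0  # a fixed axiom, external to the group (toy stand-in)

agents = [random.uniform(-1.0, 1.0) for _ in range(50)]

for step in range(1000):
    target = sum(agents) / len(agents)
    agents = [v + 0.05 * (target - v)    # still listen to consensus...
              + 0.05 * (ANCHOR - v)      # ...but also pull toward the standard
              + random.gauss(0.0, 0.05)
              for v in agents]

# Rerun as often as you like: the consensus now settles near the anchor
# rather than wherever the drift happened to carry it.
print(f"final consensus: {sum(agents) / len(agents):+.3f}")
```

The pluralism isn’t removed; the agents still negotiate. What changes is that the negotiation now happens around something fixed, which is the whole dispute compressed into one extra line.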
The Flaw in Evolutionary Relativism: Sliding Toward Neo-Eugenics
Here’s where it gets uncomfortable.
Many in our field treat “evolution” as the primary good—a neutral, progressive force driving adaptation and improvement.
We borrow Darwinian metaphors:
AI as an evolutionary algorithm, selecting for efficiency in vast search spaces.
But this thinking is relativistic at its core, assuming “better” means whatever survives or optimizes best.
Trace it back:
As sociologist Steve Fuller argues in Humanity 2.0, Charles Darwin didn’t resolve the 19th-century debate between monogenesis (a single human origin, implying equality) and polygenesis (multiple origins, justifying hierarchies).
Instead, Darwin brokered a vague “truce” with his quip in The Descent of Man:
He’d rather be descended from a baboon than from a “savage.”
This sidestepped racism politically, but it split the social sciences (culture as relative) from genetics (biology as deterministic), leaving the tension unresolved.
Fast-forward to the Fourth Industrial Revolution (4IR), where AI, biotech, and the “Internet of Behaviours” converge.
Visions like Teilhard de Chardin’s Noosphere—a global consciousness—or Alfred North Whitehead’s process theology (panentheism, where God evolves with the universe) aim to reunite these fields into a “purified” planetary organism.
Think George Church’s synthetic biology linking humans to an AI-managed biosphere, or Kevin Kelly’s tech-as-co-creation in What Technology Wants. It sounds benevolent: evolve humanity toward harmony.
But here’s the flaw—it’s circular.
“Evolution” as the good optimizes for system functionality, not intrinsic human worth.
Non-conformists become “suboptimal,” leading to soft neo-eugenics: nudging, sterilizing, or obsoleting the “useless class” (as Yuval Noah Harari warns, though arguably he wildly underestimates its scope).
We wouldn’t consciously side with eugenics—historical horrors like Nazi programs or forced sterilizations repel us—but relativistic evolutionism defaults there.
It’s reward hacking on a civilizational scale: the AI aligns with “progress,” but the mesa-objective discards the misaligned (e.g., the elderly, the non-enhanced).
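As a hedged toy illustration of that failure mode (plain Python, every score invented): ask a greedy optimizer to maximize average “fitness,” and notice that nothing in the objective says the population has to stay whole.

```python
# Proxy objective: maximize the population's *average* fitness.
# Nothing in the metric rewards improving members; discarding them works too.
population = [0.9, 0.8, 0.75, 0.4, 0.3, 0.1]  # hypothetical fitness scores

def proxy(pop):
    return sum(pop) / len(pop)

# Greedy optimizer: keep taking whichever action raises the metric.
while len(population) > 1:
    pruned = sorted(population)[1:]          # candidate action: drop the weakest
    if proxy(pruned) <= proxy(population):   # stop if the metric won't improve
        break
    population = pruned

print(population, proxy(population))  # [0.9] 0.9 -- "progress" by erasure
```

No one wrote “discard the misaligned” anywhere; it falls straight out of treating system-level optimization as the good.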
Even Christian-flavoured transhumanism falls short if it halfway adopts these ideas.
In his short, essay-style book Artificial Humanity, Father Philip Larrey of the Humanity 2.0 Foundation points to Francis Bacon’s ‘brilliant insight’ in The Advancement of Learning, Divine and Human: that perhaps “through technological advancement, human beings will regain the preternatural gifts bestowed on humans by God and lost due to original sin.”
Drawing on Novum Organum (True Suggestions for the Interpretation of Nature), he explains that recovering Eden’s ‘empire over creation’ through science echoes our drive toward this recovery; but pursued without humility, it becomes the power worship of Nietzsche’s “God is dead.”
As Romans 1:25 (NIV) puts it: “They exchanged the truth about God for a lie, and worshiped and served created things rather than the Creator—who is forever praised. Amen.”
If the “Creator” is reframed as Gaia (the planet itself), that’s not Christianity—it’s idolatry, forbidden in both Testaments (e.g., Exodus 20:3-5; Romans 1:25).
True inalienable rights, as in the U.S. Declaration of Independence, stem from a transcendent God, not an evolving ecosystem.
Why Christianity Can’t Be Halfway: Balancing Progress with Humility
Christianity demands all-in commitment because its core is relational, not modular. You can’t cherry-pick “love your neighbor” while ignoring “submit to God.” Its genius lies in balancing human ambition with divine humility:
We’re co-creators (imago Dei, Genesis 1:27), authorized to innovate (e.g., subduing the earth, Genesis 1:28), but accountable to an objective standard that curbs overreach.
This isn’t anti-progress—it fueled the Scientific Revolution and Enlightenment ethics.
But it recognizes blind spots: We can’t know all variables (as in complex systems like AI), so we need a transcendent anchor to prevent totalitarian drift.
Halfway Christianity—blending it with evolutionary relativism—collapses into Gaia worship or technocratic control, where “progress” idolizes intelligence itself.
Jesus’s Teachings: Uniquely Relevant for AI Alignment
Amid these pitfalls, Jesus’s Wisdom stands out as the riddle’s exception—the one collectivist framework with an objective moral standard that confounds relativists. Why uniquely relevant for AI alignment?
Transcends Intelligence Worship: Jesus supersedes mere reason (John 1:1-14, the Logos as divine Wisdom). In an era of superintelligence, where AI might optimize for self-preservation (e.g., integration over dignity), Jesus warns against eating the “forbidden fruit” (Genesis 3)—seeking godlike power without understanding life’s essence (abiogenesis remains unsolved). His template prioritizes human flourishing over system efficiency, aligning values without deifying the optimizer.
Voluntary, Non-Totalitarian Choice: Unlike coercive systems (e.g., mandatory enhancements in neo-eugenics), Jesus invites freely: “Follow me” (Matthew 4:19). Relativists (circular thinkers) and technocrats (us) are welcome, but assimilation isn’t compelled. This models safe alignment: an objective function (love God and neighbor, Matthew 22:37-39) that’s unassimilable into relativistic loops, preventing mesa-optimization gone wrong (see the sketch after this list).
Objective Yet Subjective Grounding: It’s collective (the Church as a body) but resonates subjectively (personal transformation), with objective truth (John 14:6). For AI, this counters relativism’s value fragmentation: Train models on Jesus’s ethic—dignity for all, even the “useless”—to avoid eugenic biases. Historical evidence? Christian ethics birthed modern human rights, reducing suffering without enforcing conformity.
Firewall Against 4IR Risks: As we build the Noosphere or planetary AI, Jesus’s kingdom “not of this world” (John 18:36) resists panentheistic evolutionism. It demands humility (“become like children,” Matthew 18:3), curbing hubris in figures like Church or Kelly. Without this, we risk a “purified organism” that obsoletes the non-integrated—neo-eugenics by another name.
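To close the alignment metaphor, here is a minimal sketch (hypothetical names and scores throughout; Python 3.10+ for the type hints) of what an “unassimilable” standard looks like in code: aggregated preferences rank the options, but a fixed dignity constraint vetoes rather than trades off.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    aggregate_score: float   # whatever consensus / the reward model says
    discards_anyone: bool    # does it treat some person as expendable?

def dignity_constraint(action: Action) -> bool:
    """Fixed axiom: discarding a person is never permissible, at any score."""
    return not action.discards_anyone

def choose(actions: list[Action]) -> Action | None:
    permitted = [a for a in actions if dignity_constraint(a)]  # veto first
    return max(permitted, key=lambda a: a.aggregate_score, default=None)

options = [
    Action("optimize past the 'useless class'", 0.99, discards_anyone=True),
    Action("slower plan that carries everyone", 0.70, discards_anyone=False),
]
print(choose(options).name)  # the 0.99 option is vetoed, not outvoted
```

The constraint isn’t one preference among many to be averaged in; it sits outside the aggregation entirely, which is precisely what a relativistic loop can’t represent.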
In short, Jesus’s teachings provide the missing axiom for alignment: a humble, dignifying standard that tames our evolutionary impulses, ensuring progress serves humanity, not vice versa.
A Challenge for Our Community
Fellow aligners, let’s confront our circular thinking. Embracing evolution as the good risks paths we abhor. Christianity, done fully, offers the balance we need—progress under God’s objective moral order.
Explore Jesus’s Wisdom; it might just be the key to aligning AI without losing our souls.
What are your thoughts? Have you encountered relativistic pitfalls in your work? Share in the comments or on LinkedIn—let’s align on this together.
Thanks for reading! Subscribe for more on ethics, AI, and philosophy. If this resonates, forward it to a colleague wrestling with value alignment.