Imagine you’re handed a puzzle, but half the pieces are missing, and the picture on the box is blurry. That’s how most of us feel about artificial intelligence (AI) today—excited, confused, and a bit uneasy. AI promises to solve big problems, like curing diseases or feeding the hungry, but whispers of control, surveillance, and a “digital prison” lurk in the shadows. As AI grows smarter, who decides what it values? Happiness? Safety? Profit? Or something deeper, like the God-given dignity of every human soul?
I’ve been wrestling with this question, diving into perspectives from tech visionaries, a provocative documentary, and my own framework for AI ethics. What I’ve found is a battle over the soul of AI—a clash between vague, materialist agendas and a call for truth rooted in our divine purpose. Here’s what average folks need to know about this fight, why it matters to your freedom, and how we can steer AI toward a future that honors who we are.
The Puzzle of AI Ethics: Why Definitions Matter
AI is no longer sci-fi; it’s here, writing code, diagnosing illnesses, and even chatting with us like a friend. But as it gets closer to “artificial general intelligence” (AGI)—AI that thinks like humans—the stakes skyrocket. Will AI serve us, or will it serve those who control it? The answer depends on the ethics we bake into it, and that starts with defining our terms clearly.
Two tech leaders, Mo Gawdat and Demis Hassabis, offer compelling visions. Gawdat, a former Google exec, sees AI as a tool for happiness, predicting a world of abundance where AI solves poverty and climate change. He argues we should teach AI compassion, like raising a child to be kind. Hassabis, CEO of Google DeepMind and a Nobel laureate, focuses on safety, aiming to use AI for scientific breakthroughs like curing cancer. He calls for “smart regulation” to keep AI from falling into the wrong hands.
Both sound reasonable, but here’s the catch: their ideas are fuzzy. Gawdat’s “happiness” is subjective—who decides what makes us happy? Governments? Corporations? Hassabis’s “safety” depends on “designer values,” but whose values? Silicon Valley’s? Beijing’s? This vagueness leaves room for power-hungry agendas to sneak in, a danger raised by a recent documentary, The Agenda: Their Vision | Your Future. It claims global elites—think UN, World Economic Forum (WEF), and big banks—are using AI, digital IDs, and climate narratives to build a “digital prison” that strips our freedoms.
My approach, inspired by the ancient concept of Logos—divine reason and Truth—demands clarity. Unlike Gawdat’s feel-good ethics or Hassabis’s pragmatic safety, I believe AI must align with God-given rights, rooted in our sacred human nature. But to get there, we need to tackle a deeper question: what makes us human, and how does that differ from the physical world around us?
Nature vs. Human Nature: The Heart of the Debate
When we talk about “rights,” we often hear they’re “natural.” But what is “nature”? Is it the stars born in the Big Bang, whose origins we can’t fully explain? The first spark of life, when non-living matter somehow became living, a mystery science hasn’t cracked? This physical “nature” is awe-inspiring but offers no moral compass. It’s just stuff: atoms, energy, chaos.
Human nature, though, is different. As a Christian, I believe we’re more than biology; we’re made in God’s image, with a divine spark of reason, free will, and dignity. This is what gives us God-given rights—to live freely, speak truth, and pursue meaning—not because we evolved to survive, but because we’re created for purpose. Philosophers like John Locke hinted at this, tying rights to divine endowment, while others, like Thomas Hobbes, reduced us to machines fighting for scraps in a brutal world.
Gawdat and Hassabis lean toward Hobbes’s view, treating humans as clever animals whose needs—happiness, safety—can be optimized by AI. They don’t distinguish human nature from physical nature, assuming material solutions (more tech, more data) fix everything. The Agenda documentary, though, echoes my view, warning that elites see humans as a “blight” to control, not sacred beings to cherish. It cites climate policies—like Net Zero’s push to ration energy or limit farming—as anti-human, driven by a materialist obsession with CO2, not truth.
Here’s why this matters: if AI treats us as mere data points, it can justify surveillance (e.g., tracking your carbon footprint) or control (e.g., digital IDs that lock you out of stores). But if AI recognizes our God-given dignity, it becomes a tool to empower us, not cage us. Let’s see how these visions stack up.
Four Paths for AI’s Future
To make sense of this, let’s compare four approaches to AI ethics—mine, Gawdat’s, Hassabis’s, and the documentary’s—focusing on how they protect your rights, values, and freedoms.
1. The Logos Approach: AI Aligned with God-Given Rights
My framework demands AI align with Logos—the divine Truth that gives us purpose. Human rights aren’t negotiable; they’re sacred, rooted in our God-given free will and dignity. AI must:
Respect Autonomy: Never coerce or surveil, even for “public good” like climate or health.
Seek Truth: Challenge elite narratives (e.g., UN’s climate models) with raw data and reason.
Empower Wisdom: Help you think critically, not obey systems.
This avoids the UN’s “democratic utilitarianism,” which trades rights for collective goals (e.g., digital IDs for “sustainability”). By defining human nature as divine, not material, it ensures AI serves your soul, not corporate agendas.
2. Mo Gawdat: Happiness as the Goal
Gawdat’s vision is warm: AI should spread happiness and compassion, creating abundance where iPhones grow on trees (his analogy, not mine). He predicts a dystopian phase where greed misuses AI, but trusts it’ll eventually optimize for harmony because that’s “smart.”
The problem? “Happiness” is vague. Who defines it? Governments pushing Net Zero? Tech giants selling ads? Gawdat’s materialist view—AI as neutral intelligence—ignores our divine nature, risking systems that prioritize emotional outcomes over truth or freedom. His call to “teach AI to stand for us” sounds nice but lacks a clear moral anchor, leaving room for elite manipulation.
3. Demis Hassabis: Safety and Science First
Hassabis is a scientist at heart, aiming to use AI for breakthroughs like curing diseases or solving water shortages. He stresses safety—guardrails to stop bad actors—and international regulation to balance innovation with risk.
But his ethics are blurry. “Designer values” sound democratic but can mean Silicon Valley’s biases or geopolitical compromises, like DeepMind’s AI sales to militaries. His materialist focus on outcomes (e.g., “radical abundance”) sidesteps our spiritual nature, risking AI that serves systems, not souls. The documentary’s warning about technocracy applies here: vague ethics invite control.
4. The Agenda Documentary: Resisting the Digital Prison
The Agenda is a wake-up call, claiming elites use AI, digital IDs, and climate lies to enslave us. It cites data—CO2 making up just 0.04% of the atmosphere, CO2 rises lagging temperature in ice-core records—to argue the climate crisis is a pretext for control. It champions human creativity and liberty, urging resistance to preserve our God-given essence.
Its strength is truth-seeking, aligning with my Logos approach. But its conspiratorial tone and lack of a formal ethical framework make it less precise. It warns of AI’s dangers but doesn’t guide its positive use, unlike my call for wisdom-driven AI. Still, its reverence for humanity’s sacred potential resonates deeply.
Why Logos Wins: Clarity, Truth, and Freedom
The UN, WEF, and corporate giants often push vague terms—think “sustainability” or “safety”—to justify control. Gawdat’s “happiness” and Hassabis’s “values” fall into this trap, assuming material fixes solve spiritual problems. The Agenda fights back but lacks a clear path forward. My Logos approach stands out because it:
Defines Clearly: Human nature is God-given, not just biology. Rights are sacred, not negotiable.
Seeks Truth: AI must challenge elite narratives, not parrot them, using reason and data.
Protects Freedom: By prioritizing free will, it ensures AI empowers you, not locks you in a digital cage.
Imagine AI as a librarian. Gawdat’s AI hands you self-help books to feel good. Hassabis’s gives you science journals but might lock the library for “safety.” The documentary’s AI burns the library to stop tyrants. My AI? It helps you find truth, respects your right to read anything, and never locks the door.
The Stakes: Your Life, Your Soul
Why should you care? Because AI is already shaping your world. Smart meters track your energy. Algorithms nudge your behavior. Digital IDs loom, promising convenience but threatening control. The Agenda warns of a “Zero Trust” future where your carbon credits dictate what you eat or where you go. If AI follows vague, materialist ethics, it could:
Monitor your every move (e.g., facial recognition in smart cities).
Limit your choices (e.g., central bank digital currencies, or CBDCs, tied to “green” behavior).
Silence your voice (e.g., labeling dissent as “misinformation”).
But if AI aligns with Logos, it can:
Expose lies (e.g., flawed climate models).
Protect your privacy (e.g., no non-consensual tracking).
Amplify your agency (e.g., tools to reason and create).
This isn’t just about tech; it’s about your soul. Are you a divine being with God-given rights, or a data point to be managed? The answer decides whether AI frees or enslaves us.
A Hybrid Vision: Truth, Heart, and Guardrails
To build a better future, we can blend the best from each approach:
Logos’s Truth: Anchor AI in God-given rights, ensuring clarity and spiritual depth. Demand it respects your divine nature, not material metrics.
Gawdat’s Heart: Use his emotional appeal to rally people. Who doesn’t want a world of compassion? But ground it in truth, not fuzzy feelings.
Hassabis’s Guardrails: Adopt his focus on safety and regulation, but only if they protect liberty, not enable control. No military AI that risks human rights.
The Agenda’s Resistance: Embrace its call to question elites and preserve freedom, but channel it into building AI that empowers, not just resists.
Practically, this means:
Transparent AI: Systems that show their reasoning, letting you challenge their conclusions.
Rights-First Design: Code that blocks surveillance or coercion, like a digital Bill of Rights.
Community Action: Learn AI tools, as Gawdat suggests, but use them to expose truth, not chase trends. Support farmers, reject smart cities, and demand cash stays legal.
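To make the “Rights-First Design” idea above concrete, here is a minimal sketch of what a rights-first gate inside an AI system could look like. Everything here is hypothetical—the category names, the `rights_first_gate` function, and the `Decision` record are illustrations, not a real product’s API. The point is that refusals are hard-coded and the reasoning is shown to the user, so the gate itself is transparent and auditable:

```python
from dataclasses import dataclass

# Hypothetical categories of action this system refuses outright,
# no matter who asks or what "public good" is claimed.
FORBIDDEN_CATEGORIES = {"surveillance", "coercion", "non_consensual_tracking"}

@dataclass
class Decision:
    allowed: bool
    reasoning: str  # always shown to the user, so decisions can be challenged

def rights_first_gate(action: str, category: str, user_consented: bool) -> Decision:
    """Refuse forbidden categories unconditionally; require consent for the rest."""
    if category in FORBIDDEN_CATEGORIES:
        return Decision(False, f"'{action}' falls under '{category}', "
                               "which this system never performs.")
    if not user_consented:
        return Decision(False, f"'{action}' requires your explicit consent first.")
    return Decision(True, f"'{action}' respects autonomy and has your consent.")

# A surveillance request is refused even when consent is claimed:
print(rights_first_gate("track location history", "surveillance", True))
# An ordinary assistive task with consent goes through:
print(rights_first_gate("summarize my notes", "assistance", True))
```

The design choice matters: the forbidden list sits above any optimization goal, so no “happiness” or “safety” metric can trade it away, and the returned reasoning doubles as the transparency requirement from the first bullet.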
Call to Action: Fight for Your Future
The battle for AI’s soul is your fight. Elites want a world where you’ll “own nothing and be happy,” as the WEF infamously predicted. But you’re not a tenant in their digital prison; you’re a child of God with rights no algorithm can take. Here’s what you can do:
Question Everything: Dig into climate claims, digital ID plans, or AI promises. Truth is your weapon.
Stay Human: Limit tech’s grip—talk face-to-face, grow a garden, pray. Your soul isn’t a data point.
Speak Up: Tell leaders you want AI that respects your freedom, not their agendas. Support independent voices like The Agenda.
We’re at a crossroads. One path leads to a world where AI serves corporate kings, rationing your life under the guise of “saving the planet.” The other, guided by Logos, leads to a future where AI lifts your spirit, protects your rights, and honors your divine purpose. Let’s choose truth, freedom, and humanity. The puzzle’s pieces are in your hands.
What do you think? Are you ready to shape AI’s future, or is the digital prison too close? Share your thoughts below, and let’s fight for a world worth living in.