Several months back I watched a YouTube interview between billionaire Reid Hoffman, co-founder of LinkedIn, and historian and, more importantly, perceived AI expert Yuval Harari. You can find the full video below.
What made me uncomfortable is the same thing that typically makes me wince when I listen to various influential figures discuss the trajectory we’re on with this undeniably novel technology.
There seem to be at least two major threads running through the majority of the discourse, at least publicly. It’s worth noting that Elon Musk and Peter Thiel have both approached the conversation from a far more meta POV, but let’s keep it simple:
One — the predominant side of the discussion — concerns itself with:
whether or not AI has plateaued
is there a plateau or is intelligence in and of itself the unpredictable multiplier
how much energy is it going to take to power these data centres of the future
what company or country will be the first to achieve AGI/ASI
if AI/robots piloted by AI agents can do all the jobs, what does humanity do
if many jobs are taken by robots, how will it affect the economy
Without missing a beat, many of the recent WEF conference panelists skipped most of the above issues and instead put their effort into convincing an audience full of investors, business people and politicians that the issues I’ve listed are merely technical problems requiring no real depth of thought, problems to be solved on the way to solving basically every other problem facing humanity. It’s a display of the often embarrassing tunnel vision of the managerial class, which, generally speaking, views AI alignment as a technical problem.
Two is the conversation that asks, without sarcasm, whether the first group has even the slightest clue what it’s talking about. AI isn’t simply a new widget or business service; we’re talking about a monumental shift in human society, and there is nothing more important than AI alignment, because this is a literal existential risk with human-extinction potential, never mind the geopolitical implications.
My discomfort with the dialogue provided fertile ground for an opinion piece, which can be found below.
I should also mention that I approach these ideas from my own novel position, one I don’t hear others repeating very often. While most people who engage in the discussion of AI alignment talk about finding a grounding in common human values to align this new technology, I ask: what do you even mean by “value,” and where do you draw that concept from?
In an attempt to make sense of that question, I’ve been trying to tackle some of the most challenging questions known to Western civilization. I fully concede that my own journey has been mostly self-directed, and admittedly amateur, in its approach, but I also like to think that’s led me to ask questions many people won’t, because there’s a real stigma attached to many aspects of the West, especially in its current, often totally bananas (that’s a technical term🤣) form. And you don’t have to take my word for it: anyone who’s been tuning into some of the most popular purveyors of modern media already knows that Piers Morgan and Joe Rogan have supplanted CNN, Fox, the BBC and, in Canada, the CBC, because the Overton window that has long kept the status quo sterilized on those platforms has been detrimental to their bottom lines. If anything is keeping those platforms alive, it’s tied up in contracts and boomers, both of which are often reluctant to adapt to the new normal, one that is being, and will continue to be, completely disintermediated by new media and by AI flooding into those platforms.
Not to lose sight of why I started writing this explainer for the video I’m attaching to it: I used Google’s NotebookLM to explain what was intended to be conveyed in the above article, titled “The Smile and the Machine.” To be fair, I also uploaded the YouTube video of Hoffman and Harari as source material from which to extract the explainer; NotebookLM in turn ingests and analyzes the transcript of that conversation, and the result is the video post you find above.
I took a quick dig at the WEF above. Just so you don’t think I’m a lone voice in the wilderness commenting on things that are beyond my comprehension, I will include a video below of mathematician, author, bioethicist and “lay theologian” John Lennox, who has in the past debated both Richard Dawkins and Christopher Hitchens on the subject of faith, and at the ripe young age of 82 is currently President of the Oxford Centre for Christian Apologetics (OCCA), so he’s no slouch. In it he goes over the speech Harari just gave at Davos, and I think it’s worth noting that while many at the conference probably thought they heard something enlightening, what I argue people with a deeper perspective hear is often something quite different.
Lastly, I will just note that in the explainer NotebookLM referred to me as a theologian, something I found amusing because, as noted, I’m just a self-educated layperson, but that doesn’t change the fact that I feel the explainer itself came out quite insightful.
One more video I will embed below, because it’s actually what prompted me this morning to look back and generate the explainer. This man is considered one of the best up-and-coming minds when it comes to thinking about AI and where it’s heading, and I couldn’t help but notice that he makes the same basic category error that people like Harari make: he mistakes “intelligence” for “wisdom.” While I may agree with many of the predictions he makes in theory, that’s not the AI alignment problem we really need to be grappling with, in my humble opinion.
God Bless.