Should We Ban AI Superintelligence? Let's Be Real.
Hundreds of public figures — from Steve Wozniak to Prince Harry — just signed a petition demanding a global ban on AI superintelligence. Their fear? That super-AI could outthink us, escape our control, and maybe even spell the end of humanity.
I get it. The Skynet comparisons. The doomsday bunkers. The "pause everything until it's safe" approach. On the surface, it sounds reasonable.
But here's the hard truth: If we don't build it, someone else will — and you'd better pray they believe in freedom.
00:00 - Intro: "Prince Harry wants to ban superintelligent AI?"
02:30 - What the open letter actually says
05:00 - The real fears behind the ban movement
07:15 - Why bans might backfire (China, anyone?)
09:20 - Historical analogies: cars, nukes, and Pandora's box
11:30 - Who benefits from slowing AI down?
13:45 - Regulation vs. prohibition — the real solution
16:00 - The only thing scarier than ASI? Letting someone else build it first.
In this episode, I break down:
🚨 Why people are calling for a ban on superintelligent AI
🤝 The fears we should absolutely empathize with
💣 Why banning it could actually make the threat worse
🧠 How we can build ASI safely — but only if we lead
👀 Why some folks shouting "pause" might just be trying to protect their power
I don't side with blind acceleration. But I don't buy moral panic either. There's a middle path — innovate with oversight, lead with principles, and don't cede the future to authoritarian AI.
This one's unfiltered, unsponsored, and unapologetic. Let's go.
Contact Mark: @markfidelman on X