Liron’s Case for a Real AI Extinction Risk — Doom Debates


About this listen

A sharp, cautionary argument: Liron Shapira lays out why superintelligent AI could plausibly foreclose the long human future. This condensed summary pares a one-hour episode down to four minutes, giving you the core reasoning fast. You’ll get Liron’s P(doom) framing (he places his own near 50%), his Bayesian and thought-experiment approach to forecasting, and the key failure modes: runaway self-modifying systems, mis-specified objectives, and instrumental drives such as self-preservation and resource acquisition. The summary covers implications for AI regulation, alignment, geopolitics, economics, job automation, and democratic control, and explains why historical analogies (such as nuclear weapons) and the orthogonality thesis shape his view. Host Liron Shapira also discusses policy responses such as international pauses and centralized safeguards, plus the political and ethical trade-offs. Ideal for listeners wanting a brisk, rigorous briefing on existential AI risk, alignment challenges, and what humanity must decide next. Listen now to get the key ideas in minutes.