Doom Debates — Liron Shapira on AGI Timelines, Cascading Risks, and What to Do

About this listen

Liron Shapira cuts through the noise to explain why crossing a capability threshold could cascade into existential danger, and why uncertainty alone demands urgent action. This summary condenses a three-hour Doom Debates Q&A into six minutes, spotlighting AI alignment, AGI timelines, and policy-ready strategies. Shapira, host of Doom Debates and here in the expert's seat, lays out his candid estimate of a 20% chance of humanity surviving to 2050, and explains how optimization can produce agent-like behavior, why defenders may not reliably beat attackers, and how reinforcement-trained agents and social-engineering campaigns raise near-term risks. Listeners will learn the mechanisms behind misgeneralization, practical political steps (movement-building, contacting representatives, supporting research), and which organizations and tactics amplify public pressure on AI regulation and safety. Keywords: AGI timelines, AI safety, AI alignment, AI regulation, existential risk, agents, capability threshold. Listen now to get the key ideas in minutes.