
AI Safety for Who?



Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms today. We discuss what actual safety would look like, drawing on self-driving car regulations.

Chapters

(00:00) - Introduction & AI Investment Insanity
(01:43) - The Problem with AI Safety
(08:16) - Anthropomorphizing AI & Its Dangers
(26:55) - Mental Health, Wellness, and AI
(39:15) - Censorship, Bias, and Dual Use
(44:42) - Solutions, Community Action & Final Thoughts

Links

AI Ethics & Philosophy
Foreign Affairs article - The Cost of the AGI Delusion
Nature article - Principles alone cannot guarantee ethical AI
Xeiaso blog post - Who Do Assistants Serve?
Argmin article - The Banal Evil of AI Safety
AI Panic News article - The Rationality Trap

AI Model Bias, Failures, and Impacts
BBC news article - AI Image Generation Issues
The New York Times article - Google Gemini German Uniforms Controversy
The Verge article - Google Gemini's Embarrassing AI Pictures
NPR article - Grok, Elon Musk, and Antisemitic/Racist Content
AccelerAId blog post - How AI Nudges are Transforming Up- and Cross-Selling
AI Took My Job website

AI Mental Health & Safety Concerns
Euronews article - AI Chatbot Tragedy
Popular Mechanics article - OpenAI and Psychosis
Psychology Today article - The Emerging Problem of AI Psychosis
Rolling Stone article - AI Spiritual Delusions Destroying Human Relationships
The New York Times article - AI Chatbots and Delusions

Guidelines, Governance, and Censorship
Preprint - R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model
Minds & Machines article - The Ethics of AI Ethics: An Evaluation of Guidelines
SSRN paper - Instrument Choice in AI Governance
Anthropic announcement - Claude Gov Models for U.S. National Security Customers
Anthropic documentation - Claude's Constitution
Reuters investigation - Meta AI Chatbot Guidelines
Swiss Federal Council consultation - Swiss AI Consultation Procedures
Grok Prompts GitHub repo
Simon Willison blog post - Grok 4 Heavy