Growing BlueDot's Impact w/ Li-Lian Ang

I'm joined by my good friend, Li-Lian Ang, first hire and product manager at BlueDot Impact. We discuss how BlueDot has evolved from their original course offerings to a new "defense-in-depth" approach, which focuses on three core threat models: reduced oversight in high-risk scenarios (e.g. accelerated warfare), catastrophic terrorism (e.g. rogue actors with bioweapons), and the concentration of wealth and power (e.g. supercharged surveillance states). On top of that, we cover how BlueDot's strategies account for and reduce the negative impacts of common issues in AI safety, including exclusionary tendencies, elitism, and echo chambers.

2025.09.15: Learn more about how to design effective interventions to make AI go well, and potentially even get funded for it, on BlueDot Impact's AGI Strategy course! BlueDot is also hiring, so if you think you'd be a good fit, I definitely recommend applying; I had a great experience when I contracted as a course facilitator. If you do end up applying, let them know you found out about the opportunity from the podcast!

Follow Li-Lian on LinkedIn, and look at more of her work on her blog!

As part of my effort to make this whole podcasting thing more sustainable, I have created a Kairos.fm Patreon which includes an extended version of this episode. Supporting gets you access to these extended cuts, as well as other perks in development.

(03:23) - Meeting Through the Course
(05:46) - Eating Your Own Dog Food
(13:13) - Impact Acceleration
(22:13) - Breaking Out of the AI Safety Mold
(26:06) - BlueDot's Risk Framework
(41:38) - Dangers of "Frontier" Models
(54:06) - The Need for AI Safety Advocates
(01:00:11) - Hot Takes and Pet Peeves

Links

BlueDot Impact website

Defense-in-Depth
BlueDot Impact blogpost - Our vision for comprehensive AI safety training
Engineering for Humans blogpost - The Swiss cheese model: Designing to reduce catastrophic losses
Open Journal of Safety Science and Technology article - The Evolution of Defense in Depth Approach: A Cross Sectorial Analysis

X-clusion and X-risk
Nature article - AI Safety for Everyone
Ben Kuhn blogpost - On being welcoming
Reflective Altruism blogpost - Belonging (Part 1: That Bostrom email)

AIxBio
RAND report - The Operational Risks of AI in Large-Scale Biological Attacks
OpenAI "publication" (press release) - Building an early warning system for LLM-aided biological threat creation
Anthropic Frontier AI Red Team blogpost - Why do we take LLMs seriously as a potential source of biorisk?
Kevin Esvelt preprint - Foundation models may exhibit staged progression in novel CBRN threat disclosure
Anthropic press release - Activating AI Safety Level 3 protections

Persuasive AI
Preprint - Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
Nature Human Behaviour article - On the conversational persuasiveness of GPT-4
Preprint - Large Language Models Are More Persuasive Than Incentivized Human Persuaders

AI, Anthropomorphization, and Mental Health
Western News article - Expert insight: Humanlike chatbots detract from developing AI for the human good
AI & Society article - Anthropomorphization and beyond: conceptualizing humanwashing of AI-enabled machines
Artificial Ignorance article - The Chatbot Trap
Making Noise and Hearing Things blogpost - Large language models cannot replace mental health professionals
Idealogo blogpost - 4 reasons not to turn ChatGPT into your therapist
Journal of Medical Society editorial - Importance of informed consent in medical practice
Indian Journal of Medical Research article - Consent in psychiatry - concept, application & implications
MediaNama article - The Risk of Humanising AI Chatbots: Why ChatGPT Mimicking Feelings Can Backfire
Becker's Behavioral Health blogpost - OpenAI's mental health roadmap: 5 things to know

Miscellaneous References
Carnegie Council blogpost - What Do We Mean When We Talk About "AI Democratization"?
Collective Intelligence Project policy brief - Four Approaches to Democratizing AI
BlueDot Impact blogpost - How Does AI Learn? A Beginner's Guide with Examples
BlueDot Impact blogpost - AI safety needs more public-facing advocacy

More Li-Lian Links
Humans of Minerva podcast website
Li-Lian's book - Purple is the Noblest Shroud

Relevant Podcasts from Kairos.fm
Scaling Democracy w/ Dr. Igor Krawczuk, for AI safety exclusion and echo chambers
Getting into PauseAI w/ Will Petillo, for AI in warfare and exclusion in AI safety