Ep 29: Crash-Testing AI for Mental Health with Shirali and Arul Nigam

EPISODE SUMMARY

In this episode, Rachel sits down with Shirali Nigam and Arul Nigam, sibling co-founders of Circuit Breaker Labs, a company built around a simple but urgent idea: AI mental health tools should be rigorously tested for safety before they ever reach a real user. Shirali brings a background in AI safety, psychology, and technology, along with an MBA from the Wharton School at the University of Pennsylvania. Arul contributes expertise in AI applications for healthcare and studied operations, analytics, and global business at Georgetown University's McDonough School of Business. Together, they walk Rachel through their framework for agentic red-teaming, a method of sending AI-powered simulated patients into conversations with mental health chatbots to find the vulnerabilities before vulnerable people do. The conversation covers how they got here personally, why the probabilistic nature of large language models makes exhaustive testing so essential, and what they are actually finding in the field, including how something as small as a misspelled word can be enough to bypass a safety guardrail.

The second half of the conversation turns to the bigger picture: who is using Circuit Breaker Labs, what clinicians and parents should look for when evaluating AI tools, and what good policy in this space could actually look like. Rachel and the Nigams explore the tension between moving fast in the startup world and the high stakes of getting things wrong in mental health. Shirali and Arul make the case for independent, third-party safety validation before products launch, rather than enforcement after harm has already occurred, drawing a comparison to food and automobile safety standards. They also push back on the idea of banning AI in mental health altogether, arguing that with a 320-to-one patient-to-provider ratio and growing wait times for care, AI used responsibly has real potential to bridge the access gap. The episode closes with a look at what is next for Circuit Breaker Labs and why they see this work as only growing more urgent over time.

RESOURCES MENTIONED

Articles Referenced

New study: AI chatbots systematically violate mental health ethics standards | Brown University

New study warns of risks in AI mental health tools | Stanford Report

Circuit Breaker Labs whitepaper: https://www.circuitbreakerlabs.ai/Whitepaper.pdf

Connect with Shirali and Arul Nigam

  • Website: https://www.circuitbreakerlabs.ai

Connect with The Mental Health Evolution

  • Website: https://www.traumaspecialiststraining.com/mental-health-evolution-podcast

  • Instagram: /thementalhealthevolution/

  • LinkedIn: /the-mental-health-evolution

  • Facebook: /TheMentalHealthEvolution

Music Credit: Music by Zach Harrison
