
The Behavioral Design Podcast

By: Samuel Salzer and Aline Holzwarth

About this listen

How can we change behavior in practice? What role does AI have to play in behavioral design? Listen in as hosts Samuel Salzer and Aline Holzwarth speak with leading experts on all things behavioral science, AI, design, and beyond. The Behavioral Design Podcast from Habit Weekly and Nuance Behavior provides a fun and engaging way to learn about applied behavioral science and how to design for behavior change in practice. The latest season explores the fascinating intersection of behavioral design and AI. Subscribe and follow! For questions or to get in touch, email podcast@habitweekly.com.
Episodes
  • AI Therapy with Alison Cerezo
    May 28 2025

    AI Co-Therapists with Alison Cerezo
    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel talk with Dr. Alison Cerezo, a clinical psychologist, professor, and Senior Vice President of Research at Mpathic, a company developing AI tools that support therapists in delivering more empathetic and precise care.

    They explore the growing role of AI in mental health, from real-time feedback during therapy sessions to tools that help clinicians detect risk, stay aligned with best practices, and reduce bias. Alison describes how Mpathic works as a co-therapist—supporting rather than replacing the human element of therapy.

    The conversation also digs into larger questions:

    • Can AI feel more empathetic than humans?
    • How do we avoid over-reliance on machines for emotional support?
    • And what does it really mean to design AI that complements rather than competes with people?


    This episode is a must-listen for anyone interested in the future of therapy, empathy, and AI—and what it looks like to build systems that enhance human care, not undermine it.

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com.

    The song used is Murgatroyd by David Pizarro.

    52 mins
  • Empathy and AI with Michael Inzlicht
    May 15 2025

    Empathic Machines with Michael Inzlicht
    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Michael Inzlicht, professor of psychology at the University of Toronto and co-host of the podcast Two Psychologists Four Beers. Together, they explore the surprisingly effortful nature of empathy—and what happens when artificial intelligence starts doing it better than we do.

    Michael shares insights from his research into empathic AI, including findings that people often rate AI-generated empathy as more thoughtful, emotionally satisfying, and effortful than human responses—yet still prefer to receive empathy from a human. They unpack the paradox behind this preference, what it tells us about trust and connection, and whether relying on AI for emotional support could deskill us over time.

    This conversation is essential listening for anyone interested in the intersection of psychology, emotion, and emerging AI tools—especially as machines get better at sounding like they care.


    1 hr and 4 mins
  • Building Moral AI with Jana Schaich Borg
    May 1 2025

    How Do You Build a Moral AI? with Jana Schaich Borg

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Jana Schaich Borg, Associate Research Professor at Duke University and co-author of the book “Moral AI and How We Get There”. Together they explore one of the thorniest and most important questions in the AI age: How do you encode human morality into machines—and should you even try?

    Drawing from neuroscience, philosophy, and machine learning, Jana walks us through bottom-up and top-down approaches to moral alignment, why current models fall short, and how her team’s hybrid framework may offer a better path. Along the way, they dive into the messy nature of human values, the challenges of AI ethics in organizations, and how AI could help us become more moral—not just more efficient.

    This conversation blends practical tools with philosophical inquiry and leaves us with a cautiously hopeful perspective: that we can, and should, teach machines to care.

     Topics Covered:

    • What AI alignment really means (and why it’s so hard)

    • Bottom-up vs. top-down moral AI systems

    • How organizations get ethical AI wrong—and what to do instead

    • The messy reality of human values and decision making

    • Translational ethics and the need for AI KPIs

    • Personalizing AI to match your values

    • When moral self-reflection becomes a design feature

    Timestamps:

    00:00  Intro: AI Alignment — Mission Impossible?
    04:00  Why Moral AI Is So Hard (and Necessary)
    07:00  The “Spec” Story & Reinforcement Gone Wrong
    10:00  Anthropomorphizing AI — Helpful or Misleading?
    12:00  Introducing Jana & the Moral AI Project
    15:00  What “Moral AI” Really Means
    18:00  Interdisciplinary Collaboration (and Friction)
    21:00  Bottom-Up vs. Top-Down Approaches
    27:00  Why Human Morality Is Messy
    31:00  Building a Hybrid Moral AI System
    41:00  Case Study: Kidney Donation Decisions
    47:00  From Models to Moral Reflection
    52:00  Embedding Ethics Inside Organizations
    56:00  Moral Growth Mindset & Training the Workforce
    01:03:00  Why Trust & Culture Matter Most
    01:06:00  Comparing AI Labs: OpenAI vs. Anthropic vs. Meta
    01:10:00  What We Still Don’t Know
    01:11:00  Quickfire: To AI or Not To AI
    01:16:00  Jana’s Most Controversial Take
    01:19:00  Can AI Make Us Better Humans?

    🎧 Like this episode? Share it with a friend or leave us a review to help others discover the show.

    1 hr and 22 mins

