• AI Therapy with Alison Cerezo
    May 28 2025

    AI Co-Therapists with Alison Cerezo
    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel talk with Dr. Alison Cerezo, a clinical psychologist, professor, and Senior Vice President of Research at Mpathic, a company developing AI tools that support therapists in delivering more empathetic and precise care.

    They explore the growing role of AI in mental health, from real-time feedback during therapy sessions to tools that help clinicians detect risk, stay aligned with best practices, and reduce bias. Alison describes how Mpathic works as a co-therapist—supporting rather than replacing the human element of therapy.

    The conversation also digs into larger questions:

    • Can AI feel more empathetic than humans?
    • How do we avoid over-reliance on machines for emotional support?
    • What does it really mean to design AI that complements rather than competes with people?


    This episode is a must-listen for anyone interested in the future of therapy, empathy, and AI—and what it looks like to build systems that enhance human care, not undermine it.

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    52 mins
  • Empathy and AI with Michael Inzlicht
    May 15 2025

    Empathic Machines with Michael Inzlicht
    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Michael Inzlicht, professor of psychology at the University of Toronto and co-host of the podcast Two Psychologists Four Beers. Together, they explore the surprisingly effortful nature of empathy—and what happens when artificial intelligence starts doing it better than we do.

    Michael shares insights from his research into empathic AI, including findings that people often rate AI-generated empathy as more thoughtful, emotionally satisfying, and effortful than human responses—yet still prefer to receive empathy from a human. They unpack the paradox behind this preference, what it tells us about trust and connection, and whether relying on AI for emotional support could deskill us over time.

    This conversation is essential listening for anyone interested in the intersection of psychology, emotion, and emerging AI tools—especially as machines get better at sounding like they care.

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr and 4 mins
  • Building Moral AI with Jana Schaich Borg
    May 1 2025

    How Do You Build a Moral AI? with Jana Schaich Borg

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Jana Schaich Borg, Associate Research Professor at Duke University and co-author of the book “Moral AI: And How We Get There”. Together they explore one of the thorniest and most important questions in the AI age: How do you encode human morality into machines—and should you even try?

    Drawing from neuroscience, philosophy, and machine learning, Jana walks us through bottom-up and top-down approaches to moral alignment, why current models fall short, and how her team’s hybrid framework may offer a better path. Along the way, they dive into the messy nature of human values, the challenges of AI ethics in organizations, and how AI could help us become more moral—not just more efficient.

    This conversation blends practical tools with philosophical inquiry and leaves us with a cautiously hopeful perspective: that we can, and should, teach machines to care.

     Topics Covered:

    • What AI alignment really means (and why it’s so hard)

    • Bottom-up vs. top-down moral AI systems

    • How organizations get ethical AI wrong—and what to do instead

    • The messy reality of human values and decision making

    • Translational ethics and the need for AI KPIs

    • Personalizing AI to match your values

    • When moral self-reflection becomes a design feature

    Timestamps:

    00:00  Intro: AI Alignment — Mission Impossible?
    04:00  Why Moral AI Is So Hard (and Necessary)
    07:00  The “Spec” Story & Reinforcement Gone Wrong
    10:00  Anthropomorphizing AI — Helpful or Misleading?
    12:00  Introducing Jana & the Moral AI Project
    15:00  What “Moral AI” Really Means
    18:00  Interdisciplinary Collaboration (and Friction)
    21:00  Bottom-Up vs. Top-Down Approaches
    27:00  Why Human Morality Is Messy
    31:00  Building a Hybrid Moral AI System
    41:00  Case Study: Kidney Donation Decisions
    47:00  From Models to Moral Reflection
    52:00  Embedding Ethics Inside Organizations
    56:00  Moral Growth Mindset & Training the Workforce
    01:03:00  Why Trust & Culture Matter Most
    01:06:00  Comparing AI Labs: OpenAI vs. Anthropic vs. Meta
    01:10:00  What We Still Don’t Know
    01:11:00  Quickfire: To AI or Not To AI
    01:16:00  Jana’s Most Controversial Take
    01:19:00  Can AI Make Us Better Humans?

    🎧 Like this episode? Share it with a friend or leave us a review to help others discover the show.


    1 hr and 22 mins
  • State of AI Risk with Peter Slattery
    Apr 16 2025

    Understanding AI Risks with Peter Slattery

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Peter Slattery, behavioral scientist and lead researcher at MIT’s FutureTech lab, where he spearheads the groundbreaking AI Risk Repository project. Together, they dive into the complex and often overlooked risks of artificial intelligence—ranging from misinformation and malicious use to systemic failures and existential threats.

    Peter shares the intellectual and emotional journey behind categorizing over 1,000 documented AI risks, how his team built a risk taxonomy from 17,000+ sources, and why shared understanding and behavioral science are critical for navigating the future of AI.

    This one is a must-listen for anyone curious about AI safety, behavioral science, and the future of technology that’s moving faster than most of us can track.

    --

    LINKS:

    • Peter's LinkedIn Profile
    • MIT FutureTech Lab: futuretech.mit.edu
    • AI Risk Repository


    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr and 10 mins
  • Enter the AI Lab
    Mar 20 2025

    Enter the AI Lab: Insights from LinkedIn Polls and AI Literature Reviews

    In this episode of the Behavioral Design Podcast, hosts Samuel Salzer and Aline Holzwarth explore how AI is shaping behavioral design processes—from discovery to testing. They revisit insights from past LinkedIn polls, analyzing audience perspectives on which phases of behavioral design are best suited for AI augmentation and where human expertise remains crucial.

    The discussion then shifts to AI-driven literature reviews, comparing the effectiveness of various AI tools for synthesizing research. Samuel and Aline assess the strengths and weaknesses of different platforms, diving into key performance metrics like quality, speed, and cost, and debating the risks of over-reliance on AI-generated research without human oversight.

    The episode also introduces Nuance’s AI Lab, highlighting upcoming projects focused on AI-driven behavioral science innovations. The conversation concludes with a Behavioral Redesign series case study on Peloton, offering a fresh take on how AI and behavioral insights can reshape product experiences.

    If you're interested in the intersection of AI, behavioral science, and research methodologies, this episode is packed with insights on where AI is excelling—and where caution is needed.


    LINKS:

    • Nuance AI Lab: Website


    TIMESTAMPS:
    00:00 Introduction and Recap of Last Year's AI Polls
    06:27 AI's Strengths in Literature Review
    15:12 Emerging AI Tools for Research
    19:31 Evaluating AI Tools for Literature Reviews
    23:57 Comparing Chinese and American AI Tools
    26:01 Evaluating Literature Review Outputs
    28:12 Critical Analysis and Human Oversight
    35:19 The Worst Performing Model
    37:21 Introducing Nuance's AI Lab
    38:51 Behavioral Redesign Series: Peloton Example
    45:21 Podcast Highlights and Future Guests

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    48 mins
  • When to AI, and When Not to AI with Eric Hekler
    Mar 6 2025

    When to AI, and When Not to AI with Eric Hekler

    "People are different. Context matters. Things change."

    In this episode of the Behavioral Design Podcast, Aline is joined by Eric Hekler, professor at UC San Diego, to explore the nuances of AI in behavioral science and health interventions. Eric’s mantra—emphasizing the importance of individual differences, context, and change—serves as a foundation for the conversation as they discuss when AI enhances behavioral interventions and when human judgment is indispensable.

    The discussion explores just-in-time adaptive interventions (JITAI), the efficiency trap of AI, and the jagged frontier of AI adoption—where machine learning excels and where it falls short. Eric shares his expertise on control systems engineering, human-AI collaboration, and the real-world challenges of scaling adaptive health interventions. The episode also explores teachable moments, the importance of domain knowledge, and the need for AI to support rather than replace human decision-making.

    The conversation wraps up with a quickfire round, where Eric debates AI’s role in health coaching, mental health interventions, and optimizing human routines.

    LINKS:

    • Eric Hekler:


    TIMESTAMPS:
    02:01 Introduction and Correction
    05:21 The Efficiency Trap of AI
    08:02 Human-AI Collaboration
    11:04 Conversation with Eric Hekler
    14:12 Just-in-Time Adaptive Interventions
    15:19 System Identification Experiment
    28:27 Control Systems vs. Machine Learning
    39:44 Challenges with Classical Machine Learning
    43:16 Translating Research to Real-World Applications
    49:49 Community-Based Research and Context Matters
    59:46 Quickfire Round: To AI or Not to AI
    01:08:27 Final Thoughts on AI and Human Evolution

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr and 7 mins
  • Sci-Fi and AI: Exploring Annie Bot with Sierra Greer
    Feb 20 2025

    Sci-Fi and AI: Exploring Annie Bot with Sierra Greer

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel dive into the ethical, emotional, and societal complexities of AI companionship with special guest Sierra Greer, author of Annie Bot. This thought-provoking novel explores AI-human relationships, autonomy, and the blurred line between artificial intelligence and the human experience.

    Sierra shares her inspiration for Annie Bot and how sci-fi can serve as a lens to explore real-world ethical dilemmas in AI development.

    The conversation covers:

    • Reinforcement learning in AI and how it mirrors human conditioning
    • The gender dynamics embedded in AI design and the ethical implications of AI companions
    • Real-life cases of people forming deep emotional bonds with AI chatbots

    The episode rounds out with a lively quickfire round, where Sierra debates whether AI should replace lost loved ones, act as conversational assistants for introverts, or intervene in human arguments.

    This is a must-listen for fans of sci-fi, behavioral science, and those fascinated by the future of AI companionship and emotional intelligence.


    LINKS:

    • Sierra Greer website
    • Annie Bot – Official Book Page
    • Goodreads Profile


    TIMESTAMPS:

    01:43 AI Companions: A Controversial Opinion

    05:48 Exploring Sci-Fi and AI in Literature

    07:42 Introducing Sierra Greer and Her Book

    09:12 Reinforcement Learning Explained

    15:47 Diving into the World of Annie Bot

    23:17 Power Dynamics and Human-Robot Relationships

    32:31 Humanity and Artificial Intelligence

    41:31 Autonomy vs. Agreeableness in Relationships

    43:20 Reinforcement Learning in AI and Humans

    46:13 Ethics and Gaslighting in AI

    48:57 Gender Dynamics in AI Design

    57:18 AI Companions and Human Relationships

    01:06:45 Quickfire Round: To AI or Not to AI

    01:12:39 Final Thoughts and Controversial Opinions

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr and 7 mins
  • AI and Behavioral Science in Public Policy with Laura de Moliere
    Feb 6 2025

    AI and Behavioral Science in Public Policy with Laura de Moliere

    In this episode of the Behavioral Design Podcast, host Samuel Salzer is joined by Laura de Moliere, a behavioral scientist with deep expertise in applying behavioral insights to public policy. As the former head of behavioral science at the UK Cabinet Office, Laura has worked at the intersection of behavioral science and policymaking during some of the most high-stakes moments in recent history, including Brexit and COVID-19.

    Samuel and Laura explore the evolving role of AI in behavioral science, reflecting on how AI can enhance decision-making, improve policymaking, and surface unintended consequences. Laura shares her AI “aha moment”—when she realized the potential of large language models to support policymakers in making more behaviorally informed decisions.

    The discussion also covers the promises and perils of AI in behavioral science, the potential of synthetic users to test interventions, and the growing challenge of balancing AI’s capabilities with human biases and policymaking needs. The episode wraps up with a playful quickfire round, where Laura debates the use of AI in everything from tax optimization to gamified urinals.

    This episode is a must-listen for anyone interested in the intersection of AI, behavioral science, and public policy, offering a nuanced and thought-provoking perspective on the future of AI in decision-making.

    LINKS:

    Laura de Moliere:

    • LinkedIn Profile

    • INCASE Framework on Unintended Consequences


    TIMESTAMPS:

    00:00 A Surprise Gift

    05:38 Reflections on 2025

    09:28 AI and Behavioral Science

    19:29 Introducing Laura de Moliere

    21:30 Start of Laura interview

    33:08 Applying Behavioral Science to AI and Government

    35:16 Behavioral Science and AI: Use Cases and Impacts

    36:32 Understanding and Interacting with AI Models

    47:43 Synthetic Users and Their Potential

    01:01:08 Quickfire Round: To AI or Not to AI

    01:06:35 Controversial Opinions on AI

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr and 11 mins