• Episode 025: Reflections on LLMs and AI with Dr. Garrison
    Sep 28 2025

    As we close out Season 2 and our emphasis on LLMs, we had the distinct privilege of chatting with Dr. Elizabeth Garrison. She is one of the few people in the world with domain expertise spanning behavior analysis (BCBA) and artificial intelligence (PhD).

    In this episode, we reflect on the state of AI research and industry work before and after the release of ChatGPT, the shift in academic AI research once the transformer architecture became broadly available, and the differences between academia and industry in both behavior science and AI.

    1 hr and 8 mins
  • Episode 024: Are we in an AI bubble?
    Sep 6 2025

    "Bubbles" are an economic phenomenon characterized by a rapid increase in asset prices that far exceed the asset's underlying fundamental value, driven by speculative buying and herd behavior rather than intrinsic worth.

    In this episode, Jake and David ask, "Are we in an AI bubble?" And, if so, what might this mean for individuals and organizations as they navigate the current AI strategic landscape?

    1 hr and 7 mins
  • Episode 023: Your Brain on LLMs
    Aug 31 2025

    In this episode, Jake and David discuss the burgeoning area of research looking at how interacting with LLMs impacts our skills and abilities in good and bad ways. As with most things in life, the effects are not black-and-white. And, we discuss strategies and tactics we can all engage in to try to get the benefits without the drawbacks.

    1 hr and 14 mins
  • Episode 022: The Ethics of LLMs that Few Talk About
    Aug 9 2025

    Conversations around AI ethics often focus on a suite of incredibly important topics such as data security and privacy, model bias, model transparency, and explainability. However, each time we use large AI models (e.g., diffusion models, LLMs), we reinforce a host of additional potentially unethical practices that are needed to build and maintain these systems.

    In this episode, Jake and David discuss some of these unsavory topics, such as human labor costs and environmental impact. Although it's a bit of a downer, it's crucial for each of us to acknowledge how our behavior impacts the larger ecosystem and recognize our role in perpetuating these practices.

    1 hr and 11 mins
  • Episode 021: Explainable AI and LLMs
    Aug 3 2025

    "Explainable AI", aka XAI, refers to a suite of techniques that help AI system developers and users understand why particular inputs to a system produced the observed outputs.

    Industries such as healthcare, education, and finance require that any system using mathematical models or algorithms to influence the lives of others be transparent and explainable.

    In this episode, Jake and David review what XAI is, classical techniques in XAI, and the burgeoning area of XAI techniques specific to LLM-driven systems.

    1 hr and 13 mins
  • Episode 020: Evidence-Based Practices for Prompt Engineering
    Jul 20 2025

    Prompt engineering involves a lot more than simply getting smarter about how you structure the prompts you type into an LLM's browser interface.

    Furthermore, a growing body of peer-reviewed research provides us with best practices to improve the accuracy and reliability of LLM outputs for the specific tasks we build systems around.

    In this episode, Jake and David review evidence-based best practices for prompt engineering and, importantly, highlight what proper prompt engineering actually requires, which suggests that most of us likely cannot call ourselves prompt engineers.

    1 hr and 8 mins
  • Episode 019: LLM Evaluation Frameworks
    Jul 6 2025

    Lots of people like to talk about the importance of prompts, context, and what is sent to an LLM. Few discuss an even more important aspect of an LLM-driven system: evaluating its outputs.

    In this episode, we discuss traditional and modern metrics used to evaluate LLM outputs. And, we review the common frameworks for collecting those evaluations.

    Though evals are a lot of work (and easy to do poorly), those building (or buying) LLM-driven systems should be transparent about their process and the current state of their eval framework.

    1 hr and 28 mins
  • Episode 018: Data Privacy and Security Considerations When Working with LLMs
    Jun 29 2025

    Jake and David chat about best practices and considerations for those building and using AI systems that leverage LLMs.

    1 hr and 12 mins