
The AI Fundamentalists

By: Dr. Andrew Clark & Sid Mangalik

About this listen

A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.

© 2025 The AI Fundamentalists
Episodes
  • Metaphysics and modern AI: What is causality?
    Jan 27 2026

    In this episode of our series about Metaphysics and modern AI, we break causality down to first principles and explain how to tell genuine causal mechanisms from convincing correlations. From gold-standard randomized controlled trials (RCTs) to natural experiments and counterfactuals, we map the tools that build trustworthy models and safer AI. (A small simulation sketch follows the topic list below.)

    • Defining causes, effects, and common causal structures
    • Gestalt theory: Why correlation misleads and how pattern-seeking tricks us
    • Statistical association vs causal explanation
    • RCTs and why randomization matters
    • Natural experiments as ethical, scalable alternatives
    • Judea Pearl’s do-calculus, counterfactuals, and first-principles models
    • Limits of causality, sample size, and inference
    • Building resilient AI with causal grounding and governance
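
    As a companion to "RCTs and why randomization matters," here is a minimal simulation sketch. It is ours, not from the episode, and every name and number in it is illustrative: a hidden confounder makes a treatment with no real effect look beneficial in observational data, and random assignment dissolves the illusion.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Hidden confounder: overall health drives both treatment uptake and outcome.
        health = rng.normal(size=n)

        # Observational data: healthier units are more likely to take the treatment.
        treated_obs = rng.random(n) < 1 / (1 + np.exp(-health))

        # By construction, the treatment's true causal effect on the outcome is zero.
        outcome = health + rng.normal(size=n)

        naive = outcome[treated_obs].mean() - outcome[~treated_obs].mean()
        print(f"observational estimate: {naive:+.3f}")  # large spurious "effect"

        # RCT: a coin flip breaks the link between health and treatment.
        treated_rct = rng.random(n) < 0.5
        rct = outcome[treated_rct].mean() - outcome[~treated_rct].mean()
        print(f"randomized estimate:    {rct:+.3f}")  # approximately zero

    The observational contrast measures the confounder, not the treatment; randomization is what licenses a causal reading of the same arithmetic.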

    This is the fourth episode in our metaphysics series. Each topic in the series leads to the fundamental question, "Should AI try to think?"

    Check out previous episodes:

    • Series Intro
    • What is reality?
    • What is space and time?

    If conversations like this sharpen your curiosity and help you think more clearly about complex systems, then step away from your keyboard and enjoy this journey with us.


    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    36 mins
  • Why validity beats scale when building multi‑step AI systems
    Jan 6 2026

    In this episode, Dr. Sebastian (Seb) Benthall joins us to discuss research from his and Andrew's paper, "Validity Is What You Need," on building agentic AI that actually works in the real world.

    Our discussion connects systems engineering, mechanism design, and requirements engineering to multi‑step AI that delivers measurable enterprise impact.

    • Defining agentic AI beyond LLM hype
    • Limits of scale and the need for multi‑step control
    • Tool use, compounding errors, and guardrails (see the sketch after this list)
    • Systems engineering patterns for AI reliability
    • Principal–agent framing for governance
    • Mechanism design for multi‑stakeholder alignment
    • Requirements engineering as the crux of validity
    • Hybrid stacks: LLM interface, deterministic solvers
    • Regression testing through model swaps and drift
    • Moving from universal copilots to fit‑for‑purpose agents
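
    To make the "compounding errors" point concrete, here is a back-of-the-envelope sketch; it is ours, not the paper's, and the numbers are purely illustrative. If each step of an agent pipeline is independently correct with probability p, an n-step chain is correct with probability about p**n:

        # Illustrative only: end-to-end reliability of a pipeline whose steps
        # each succeed independently with probability p decays geometrically.
        for p in (0.99, 0.95, 0.90):
            for n_steps in (1, 5, 10, 20):
                print(f"per-step accuracy {p:.2f}, {n_steps:2d} steps "
                      f"-> end-to-end {p ** n_steps:.2f}")

    At 95% per-step accuracy, a 20-step chain finishes correct only about 36% of the time, which is the arithmetic behind the guardrails and deterministic solvers above.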

    You can also catch more of Seb's research on our podcast. Tune in to "Contextual integrity and differential privacy: Theory versus application."


    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    40 mins
  • 2025 AI review: Why LLMs stalled and the outlook for 2026
    Dec 22 2025

    Here it is! We review the year in which scaling large AI models hit a ceiling, Google reclaimed momentum with efficient vertical integration, and the market shifted from hype to viability.

    Join us as we talk about why human-in-the-loop is failing, why generative AI agents validating other agents compounds errors, and how small, expert-curated datasets quietly beat the big models.

    • Google’s resurgence with Gemini 3.0 and TPU-driven efficiency
    • Monetization pressures and ads in co-pilot assistants
    • Diminishing returns from LLM scaling
    • Human-in-the-loop pitfalls and incentives
    • Agents validating agents and compounding errors
    • Small, high-quality data outperforming synthetic data
    • Expert systems, causality, and interpretability
    • Research trends returning toward statistical rigor
    • 2026 outlook for ROI, governance, and trust

    We remain focused on the responsible use of AI. And while the market continues to adjust its expectations for return on investment from AI, we're excited to see companies exploring "return on purpose" as a new path toward transformative AI systems for their businesses.


    What are you excited about for AI in 2026?


    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    42 mins