The AI Fundamentalists

By: Dr. Andrew Clark & Dr. Sid Mangalik

About this listen

A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.

© 2026 The AI Fundamentalists
Economics, Politics & Government
Episodes
  • Beyond Boosted Trees: Christoph Molnar on the Rise of Tabular Foundation Models
    Apr 21 2026

    As the AI landscape evolves, the methods we use to process structured data are undergoing a silent revolution. Join us to explore how Tabular Foundation Models (TFMs) are challenging the decade-long reign of tree-based algorithms, why the traditional "train and predict" workflow is being replaced by "in-context learning," and what this shift means for the future of resilient modeling.

    To help us, Christoph Molnar, renowned expert in machine learning interpretability and author of the Mindful Modeler newsletter, joins us to share his perspective on the emergence of tabular transformers, the surprising power of synthetic data, and how to maintain model safety in a world without parameter updates.

    • The decline of the "fit and predict" paradigm in tabular data
    • Transformer architectures vs. traditional models like XGBoost and LightGBM
    • In-context learning: Predicting without traditional training steps
    • The role of Structural Causal Models (SCMs) in generating training data
    • Why models trained on "math and probability" succeed on real-world datasets
    • Hardware accessibility and running foundation models on local MacBooks
    • Integrating SHAP values and conformal prediction for model interpretability
    • The future of the data science workflow: One tool among many or a total shift?
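The contrast between the classical "fit and predict" workflow and in-context learning can be sketched in a few lines. The class below is a deliberately toy stand-in, not the actual tabular foundation model architecture discussed in the episode: it makes predictions by conditioning on a labeled "context" set passed in at inference time, with no training step and no parameter updates.

```python
import numpy as np

class ToyInContextClassifier:
    """Toy stand-in for a tabular foundation model: there is no fit()
    step. Predictions are conditioned on a labeled context set supplied
    at inference time, via distance-weighted voting."""

    def predict(self, X_context, y_context, X_query, temperature=1.0):
        X_context = np.asarray(X_context, dtype=float)
        y_context = np.asarray(y_context)
        classes = np.unique(y_context)
        preds = []
        for x in np.asarray(X_query, dtype=float):
            # Attention-like weights: closer context rows count more.
            d2 = np.sum((X_context - x) ** 2, axis=1)
            w = np.exp(-d2 / temperature)
            scores = [w[y_context == c].sum() for c in classes]
            preds.append(classes[int(np.argmax(scores))])
        return np.array(preds)

# Classical workflow: fit(X, y) updates model parameters, then predict(X).
# In-context workflow: the labeled data rides along with each query.
clf = ToyInContextClassifier()
X_ctx = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]]
y_ctx = [0, 0, 1, 1]
print(clf.predict(X_ctx, y_ctx, [[0.05, 0.1], [5.1, 5.0]]))  # → [0 1]
```

The point of the sketch is the interface, not the voting rule: nothing about the "model" changes between queries, which is the property that lets a pretrained tabular transformer serve new datasets without retraining.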

    This episode is full of technical insights and forward-looking predictions that are sure to change how you approach your next dataset. As we move into a new era of AI, it’s the perfect time to explore the fundamentals of the next frontier!

    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    32 mins
  • AI and the lost art of reading
    Mar 3 2026

    As information sources have become abundant and attention spans have shortened in the age of AI, we take on the lost art of reading. Join us to explore why reading rates are falling, how that shift affects judgment and opportunity, and how interdisciplinary books help us see patterns across history, economics, and technology.

    To help us, Alisa Rusanoff, CEO of Eltech AI, joins us to share her perspective on reading, debate volume versus depth, and offer practical ways to reclaim attention and read with intention.

    • Evidence on declining reading rates among adults, teens and children
    • Noise versus signal in the attention economy
    • Mental models and interdisciplinary synthesis for better decisions
    • AI’s limits and why human integration still matters
    • Cycles in debt, trade, demography, and geopolitics
    • Fiction as a cultural sensor for lived experience
    • Wealth gaps, polarization and the need for critical thinking
    • Practical habits to train feeds and protect reading time
    • Challenge to read, reflect, and apply insights

For people wondering whether they are reading enough:

    • Reading just 1 book a year puts you in the top 60% of readers
    • Read 4 books a year to be in the top 50% of readers
    • Read 10 books a year to be in the top 20% of readers
    • For those looking to be in the top 5% of readers, expect to read at least 50 books

This episode is full of research and fun connections that are sure to make you think positively about your commitment to reading. At the time of this episode, it's not too late to join the top 20% in 2026!
    46 mins
  • Metaphysics and modern AI: What is causality?
    Jan 27 2026

In this episode of our series about Metaphysics and modern AI, we break causality down to first principles and explain how to tell genuine causal mechanisms from convincing correlations. From gold-standard randomized controlled trials (RCTs) to natural experiments and counterfactuals, we map the tools that build trustworthy models and safer AI.

    • Defining causes, effects, and common causal structures
    • Gestalt theory: Why correlation misleads and how pattern-seeking tricks us
    • Statistical association vs causal explanation
    • RCTs and why randomization matters
    • Natural experiments as ethical, scalable alternatives
    • Judea Pearl’s do-calculus, counterfactuals, and first-principles models
    • Limits of causality, sample size, and inference
    • Building resilient AI with causal grounding and governance
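The gap between statistical association and causal explanation can be seen in a small simulation. The structural causal model below is illustrative (the variable names and coefficients are invented for this sketch): a confounder Z drives both X and Y, X has no direct effect on Y, yet observational data show a strong X–Y correlation. Overriding X's mechanism, the essence of Pearl's do() intervention, makes the association vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(do_x=None):
    """Toy structural causal model: Z -> X and Z -> Y, but X has
    NO direct effect on Y. Passing do_x overrides X's mechanism,
    which is what the do() intervention does."""
    z = rng.normal(size=n)                                    # confounder
    x = 2.0 * z + rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 3.0 * z + rng.normal(size=n)                          # note: no x term
    return x, y

# Observational data: X and Y are strongly correlated via Z ...
x_obs, y_obs = simulate()
print(np.corrcoef(x_obs, y_obs)[0, 1])        # large, ~0.85

# ... but under intervention, E[Y | do(X=1)] ≈ E[Y | do(X=5)] ≈ 0:
_, y_do1 = simulate(do_x=1.0)
_, y_do5 = simulate(do_x=5.0)
print(y_do1.mean(), y_do5.mean())             # both ≈ 0.0
```

This is exactly the failure mode a purely associative model walks into: it would predict Y from X beautifully, then break the moment anyone acts on X.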

This is the fourth episode in our metaphysics series. Each topic in the series builds toward the fundamental question, "Should AI try to think?"

    Check out previous episodes:

    • Series Intro
    • What is reality?
    • What is space and time?

If conversations like this sharpen your curiosity and help you think more clearly about complex systems, then step away from your keyboard and enjoy this journey with us.
    36 mins