• AI Agents in Production (part 2)
    Feb 3 2026
    From reactive chatbots to agents that plan, delegate, and think across extended timescales. We explore Deep Agents and the Recursive Language Models paradigm that's redefining AI in 2026.

    What's the difference between a chatbot and an agent? A chatbot responds; an agent acts.

    In this episode, we go deep on:
    • Deep Agents: systems that plan and delegate like project managers
    • Planning patterns: hierarchical decomposition, reactive planning, plan repair
    • Recursive Language Models: context folding that enables multi-day tasks without degradation
    • Production architectures: how Manus and Claude Code orchestrate complex agents
    • The future: autonomous, collaborative, and metacognitive agents
    The future of AI isn't bigger models; it's intelligent architecture around them.
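
    To make the "plan and delegate like project managers" idea concrete, here is a minimal sketch of hierarchical decomposition in Python. The plan table and the `worker` handlers are illustrative stand-ins; a real Deep Agent would ask a model to produce (and repair) the plan and would delegate leaves to specialist sub-agents.

```python
def run(task, plans, handlers):
    """Toy hierarchical decomposition.

    Tasks that appear in `plans` are split into sub-tasks and solved
    recursively; leaf tasks are handed off to a sub-agent handler.
    """
    if task in plans:
        results = []
        for step in plans[task]:
            results.extend(run(step, plans, handlers))
        return results
    return [handlers[task](task)]  # leaf: delegate to a specialist
```

    With a two-level plan such as `{"ship feature": ["build", "verify"], "verify": ["test", "review"]}`, calling `run("ship feature", ...)` walks the tree depth-first and returns one result per leaf step.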

    This episode includes AI-generated content.
    15 mins
  • AI Agents in Production (part 1)
    Jan 28 2026
    Why does 60% of an AI agent's success have nothing to do with the model? In this episode, we explore context engineering: the hidden discipline that separates impressive demos from systems that actually work in production.

    Ever built an AI agent that was brilliant in testing but failed miserably in production? The problem isn't the model. It's the context.

    In this episode, we cover:
    • Context rot: why agents "forget" and degrade over time
    • Context blindness: when the agent has information but can't use it
    • Context hallucination: the danger of plausible inventions
    • Memory architecture: hot, warm, and cold memory for robust agents
    • Production patterns: what Manus and Claude Code do behind the scenes
    If you're building with AI, this episode will change your approach.
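
    A minimal sketch of the hot/warm/cold memory idea discussed in the episode, assuming a toy Python interface (class and method names are illustrative, and the 40-character truncation stands in for a real summarizer):

```python
from collections import deque

class TieredMemory:
    """Toy three-tier memory: hot (recent turns), warm (summaries), cold (archive)."""

    def __init__(self, hot_size=4):
        self.hot = deque(maxlen=hot_size)   # verbatim recent messages
        self.warm = []                      # compressed summaries of evicted turns
        self.cold = []                      # full transcript, searchable offline

    def add(self, message):
        if len(self.hot) == self.hot.maxlen:
            evicted = self.hot[0]           # oldest hot message is about to drop
            self.warm.append(evicted[:40])  # stand-in for a real summarizer
        self.hot.append(message)
        self.cold.append(message)

    def context(self):
        # What actually enters the prompt: summaries first, then recent turns.
        return list(self.warm) + list(self.hot)
```

    The point of the split is that the prompt stays bounded (warm summaries plus a fixed hot window) while nothing is truly lost: cold storage keeps the full history for offline retrieval.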

    This episode includes AI-generated content.
    13 mins
  • Context Rot
    Nov 21 2025
    This episode of Rooting exposes the hidden enemy of modern agent architectures: context rot. We explore why long context windows aren’t a silver bullet, how attention budgets degrade over time, and the four ways rot shows up in real systems—poisoning, distraction, confusion, and clash. Listeners learn why million-token prompts still fail, why observability must extend into the model’s working memory, and how emerging strategies such as isolation, selective retrieval, compression, external memory, semantic chunking, and standards such as MCP are reshaping how robust agents are built. This is a practical, technical deep-dive for architects and developers who want their AI systems to survive contact with reality.

    This episode includes AI-generated content.
    23 mins
  • Formal Logic - pt. 02
    Sep 3 2025
    We are moving past the limitations of probabilistic AI by embracing formal logic and verification. This approach allows us to mathematically prove that our AI systems will behave correctly under specific conditions, providing a new level of trust and reliability essential for critical applications in finance, healthcare, and beyond.

    This is the second and final part of a two-part episode.
    13 mins
  • Formal Logic - pt. 01
    Aug 26 2025
    We are moving past the limitations of probabilistic AI by embracing formal logic and verification. This approach allows us to mathematically prove that our AI systems will behave correctly under specific conditions, providing a new level of trust and reliability essential for critical applications in finance, healthcare, and beyond.

    This is the first part of a two-part episode.
    19 mins
  • Context Engineering
    Aug 19 2025
    We go beyond prompt engineering to focus on context engineering, a systematic approach to building production-grade AI. By treating the agent's context as a structured, observable pipeline, we enable teams to create robust, cost-effective, and scalable AI systems that deliver real business value.
    17 mins
  • Memory
    Aug 12 2025
    What separates a forgetful chatbot from an AI agent that feels truly smart?

    In this episode, Root unpacks the power of memory in AI—from basic sliding windows to advanced retrieval-augmented and memory-augmented strategies inspired by neuroscience.

    Discover how the right memory architecture enables personalization, learning, and adaptability in your agents.

    Whether you’re building customer support bots, coding copilots, or next-gen assistants, you’ll get practical examples, production tips, and a glimpse into the future of multi-modal, real-time memory in AI.

    Tune in and learn how to make every interaction count!
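
    As a taste of the retrieval-augmented strategies covered here, a minimal sketch in Python. Keyword overlap stands in for the embedding similarity a production system would use; the class and method names are illustrative:

```python
class RetrievalMemory:
    """Toy retrieval-augmented memory: store everything, recall by keyword overlap."""

    def __init__(self):
        self.store = []  # list of (text, token-set) pairs

    def remember(self, text):
        self.store.append((text, set(text.lower().split())))

    def recall(self, query, k=2):
        # Rank stored memories by how many query tokens they share.
        q = set(query.lower().split())
        ranked = sorted(self.store, key=lambda item: len(q & item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

    Unlike a sliding window, nothing is evicted: `recall` pulls only the memories relevant to the current turn back into context, which is what lets an agent stay personal across long histories without an ever-growing prompt.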
    17 mins
  • A Bigger Context
    Aug 5 2025
    Dive into the evolving world of AI context windows with Rooting. This episode unpacks the surprising paradox of 'more memory' in AI models: exploring the immense benefits of larger context, the hidden limitations like reasoning degradation, and the cutting-edge engineering techniques—from positional encoding tricks to novel reasoning architectures—that are shaping how AI truly understands and remembers. Essential listening for solution architects, software developers, and data scientists navigating the complexities of large language models.
    29 mins