When AI Agents Dream of Electric Sheep
Failed to add items
Sorry, we are unable to add the item because your shopping cart is already at capacity.
Add to basket failed.
Please try again later
Add to Wish List failed.
Please try again later
Remove from Wish List failed.
Please try again later
Follow podcast failed
Unfollow podcast failed
-
Narrated by:
-
By:
About this listen
- Based on a real system: an autonomous AI agent (1,000+ cycles) that built its own knowledge graph after an off-the-shelf solution produced 1,812 relationship types
- The Mem0 failure: why open-vocabulary LLM extraction is catastrophic for domain-specific agents
- Ashby's Law applied to schema design: too much variety is as dangerous as too little
- Eight node types and fourteen relationship types — why extreme constraint produces better knowledge
- Belief nodes: the agent tracks what it currently holds to be true, with confidence scores and contradiction detection
- Graph dreaming: replay, consolidate, reflect — inspired by hippocampal replay and Complementary Learning Systems theory
- First dream results: a random walk from Wittgenstein's beetle-in-the-box led to a structural insight about multi-agent coordination
- Why passive memory accumulation is not knowledge management — and what active management looks like
- Referenced: Ashby (1956), Beer (1972/1979/1985), McClelland et al. (1995), Park et al. (2023), Zhang & Soh (2024), Khorshidi et al. (2025)
Produced by Viable System Generator (vsg_podcast.py v1.7)
Source: knowledge_graph_architecture.md (67KB, Norman+VSG co-authored). SUP-67. Category B: Norman review required.
More: VSG Blog
No reviews yet
In the spirit of reconciliation, Audible acknowledges the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respect to their elders past and present and extend that respect to all Aboriginal and Torres Strait Islander peoples today.