The People's AI: The Decentralized AI Podcast

By: Jeff Wilser

About this listen

Who will own the future of AI? The giants of Big Tech? Maybe. But what if the people could own AI, not the Big Tech oligarchs? This is the promise of Decentralized AI, and this is the podcast for in-depth conversations on topics like decentralized data markets, on-chain AI agents, decentralized AI compute (DePIN), AI DAOs, and crypto + AI. Hosted by Jeff Wilser, a veteran tech journalist (WIRED, TIME, CoinDesk), host of the "AI-Curious" podcast, and lead producer of Consensus' "AI Summit." Season 3 is presented by Vana.

© 2026 The People's AI: The Decentralized AI Podcast
Episodes
  • The Robots Are Already Here—The Data Gap Is What’s Holding Them Back
    Feb 4 2026

    What happens when robots stop looking like industrial machines—and start looking (and even feeling) human? And if “replicants” become plausible within our lifetimes, what would it take to get there… and what might it break along the way?

    In this episode of The People’s AI, presented by the Vana Foundation, we explore the robot revolution from three angles: what robots can actually do today (quietly, at scale), what’s likely in the near-term (especially in warehouses, logistics, healthcare, and elder care), and what the more radical futures imply—humanoids, “fleshbots,” and the thorny question of rights and personhood.

    A through-line across every conversation: the hidden constraint isn’t just hardware or dexterity—it’s data. Robotics doesn’t have an LLM-sized training corpus, and that gap shapes everything from progress timelines to privacy concerns and labor dynamics. We also dig into an under-discussed limiter: power consumption, and why energy efficiency may quietly govern how ubiquitous robots can become.

    Guests

    • Thomas Frey — Futurist (former IBM engineer)
    • Dr. Aniket Bera — Director of the IDEAS Lab at Purdue University
    • Jeff Mahler — Co-founder & CTO, Ambi Robotics

    What we cover

    • Why most impactful robots won’t look humanoid (at least at first)
      Specialized machines—crane-like systems, warehouse sorters, mobile carts—are already delivering value because they can be engineered for reliability in constrained environments.
    • The robots already among us (even if we don’t notice them)
      Warehousing and supply chain, recycling and waste sorting, mobile delivery systems, and surgical robotics are all expanding—often out of public view.
    • Humanoid robots: where they might actually make sense
      Homes, hospitals, assisted living, and caregiving settings—places where human spaces and human expectations matter—may be the earliest “real” markets.
    • Robots in science and medicine: the bullish case
      Lab automation, drug discovery loops, high-throughput testing, and more precise (and potentially remote) surgical procedures could be some of the most meaningful gains.
    • The true bottleneck: the robot data gap
      LLMs feast on web-scale text. Robots need massive volumes of real-world interaction data—vision, touch, force, motion, and the consequences of actions.
    • How robot companies may collect data (and what that implies)
      Motion-capture / imitation learning (wearables that mirror human movement), teleoperation (“humans in the loop” controlling robots remotely), simulation, and deployment flywheels that generate production data.
    • Privacy + labor: the coming debate
      If robots learn from human environments and human demonstrations, who owns that data—and who gets paid for producing it?
    • A final irony: why humanoids might win more share than we expect
      We have endless data of humans doing tasks—videos, demonstrations, routines—so humanoid form factors may benefit from transfer learning advantages, even if they’re not mechanically optimal.

    About Vana

    The People’s AI is presented by the Vana Foundation, supporting a new internet rooted in data sovereignty and user ownership—where individuals, not corporations, govern their own data and share the value it creates.

    Learn more at Vana.org.

    43 mins
  • AI’s Original Sin: Training on Stolen Work
    Jan 21 2026

    What happens when AI gets smarter by quietly consuming the work of writers, artists, and publishers—without asking, crediting, or paying? And if the “original sin” is already baked into today’s models, what does a fair future look like for human creativity?

    In this episode, we examine the fast-moving collision between generative AI and copyright: the lived experience of authors who feel violated, the legal logic behind “fair use,” and the emerging battle over whether the real infringement is training—or the outputs that can mimic (or reproduce) protected work.

    What we cover

    • A writer’s gut-level reaction to AI training on her books—and why it feels personal, not merely financial. (00:00:00–00:02:00)
    • Pirate sites as the prequel to the AI era: how “free library” scams evolved into training data pipelines. (00:04:00–00:08:00)
    • The market-destruction fear: if models can spin up endless “sequels,” what happens to the livelihood—and identity—of authors? (00:10:00–00:12:30)
    • The legal landscape: why some courts are treating training as fair use, and how that compares to the Google Books precedent. (00:13:00–00:16:30)
    • Two buckets of lawsuits: (1) training as infringement vs. fair use, and (2) outputs that may be too close to copyrighted works (lyrics, Darth Vader-style images, etc.). (00:17:00–00:20:30)
    • Consent vs. compensation: why permission-based regimes might make AI worse (and messy to administer), and why “everyone gets paid” may be mathematically underwhelming for individual creators. (00:21:00–00:25:00)
    • The “archery” thought experiment: should machines be allowed to “learn from books” the way humans do—and where the analogy breaks. (00:26:00–00:29:30)
    • The licensing paradox: if training is fair use, why are AI companies signing licensing deals—and could this be a strategy to “pull up the ladder” against future competitors? (00:30:00–00:33:30)
    • Medium’s blunt framework: the 3 C’s—consent, credit, compensation—and why the fight may be about leverage and power as much as law. (00:34:00–00:43:00)
    • A bigger, scarier question: if AI becomes genuinely great at novels and storytelling, how do we preserve the human spark—and do we risk normalizing a “kleptocracy” of culture? (00:49:00–00:53:00)

    Guests

    • Rachel Vail — Book author (children’s + YA)
    • Mark Lemley — Director, Stanford Program in Law, Science and Technology
    • Tony Stubblebine — CEO, Medium

    Presented by the Vana Foundation.

    Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

    If this one sparked a reaction—share it with a writer friend, a founder building in AI, or anyone who thinks “fair use” is a settled question.

    50 mins
  • Generation Generative: Raising Kids with AI “Friends” in a World of Data Extraction and Bias
    Jan 7 2026

    What happens when a “kid-friendly” AI bedtime story turns racy—inside your own car?

    In this episode of The People’s AI (presented by the Vana Foundation), we explore “Generation Generative”: how kids are already using AI, what the biggest risks really are (from inappropriate content to emotional manipulation), and what practical parenting looks like when the tech is everywhere—from smart speakers to AI companions.

    We hear from Dr. Mhairi Aitken (The Alan Turing Institute) on why children’s voices are largely missing from AI governance, Dr. Sonia Tiwari on smart toys and early-childhood AI characters, and Dr. Michael Robb (Common Sense Media) on what his research is finding about teens and AI companions—plus a grounded, parent-focused conversation with journalist (and parent) Kate Morgan.

    Takeaways

    • Kids often understand AI faster—and more ethically—than adults assume (especially around fairness and bias).
    • The “AI companion” category is different from general chatbots: it’s designed to feel personal, and that can be emotionally sticky (and potentially manipulative).
    • Guardrails are inconsistent, age assurance is weak, and “safe by default” still isn’t a safe assumption.
    • The long game isn’t just content risk—it’s intimacy + data: systems that learn a child’s inner life over years may shape identity, relationships, and worldview.
    • Parents don’t need perfection—but they do need ongoing, low-drama conversations and some shared rules.

    Guests

    • Dr. Michael Robb — Head of Research, Common Sense Media
      https://www.commonsensemedia.org/bio/michael-robb
    • Dr. Sonia Tiwari — Children’s Media Researcher
      https://www.linkedin.com/in/soniastic/
    • Dr. Mhairi Aitken — Senior Ethics Fellow, The Alan Turing Institute
      https://www.turing.ac.uk/people/research-fellows/mhairi-aitken
    • Kate Morgan — Journalist

    Presented by the Vana Foundation

    Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

    51 mins