• AI Trailer
    Aug 29 2025

    "Artificial Intelligence: AI at the Edge" - A 100-episode journey into the heart of AI, hosted by Maitt Saiwyer. From the foundational theories of Turing and Minsky to the cutting-edge breakthroughs in deep learning and reinforcement learning, we explore the science, history, ethics, and politics that define artificial intelligence. Delve into the human side of AI, confronting its costs, its promises, and its potential futures—from corporate power struggles in Silicon Valley to the dream and danger of a technological singularity. Whether you're a builder, a skeptic, a policymaker, or simply curious, join us as we uncover the imagination and insight you need to understand the technology shaping our century.

    1 min
  • Episode 50 – Limits of Deep Learning
    Aug 28 2025

    Deep learning may be powerful, but it isn’t magic. This episode closes the first half of the series by examining the limitations of today’s models: their hunger for data and energy, their lack of reasoning and common sense, and their vulnerability to bias and adversarial attacks. We’ll discuss why critics warn that deep learning alone may not lead to general intelligence, and why researchers are already searching for the next paradigm shift.

    48 mins
  • Episode 49 – AI in the Wild
    Aug 28 2025

    By the mid-2010s, deep learning was leaving the lab and entering the real world. This episode highlights how neural networks transformed medicine, from cancer detection to drug discovery, revolutionized translation through systems like Google Translate, and even entered the arts through generative creativity. We’ll explore the successes and failures of applying AI to messy, high-stakes environments—and what this revealed about the promises and limits of deep learning.

    35 mins
  • Episode 48 – AIMA: The Textbook
    Aug 28 2025

    Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig has been the most widely used AI textbook for decades. This episode explores why AIMA became the gold standard for AI education, covering everything from search and logic to probabilistic reasoning and learning. We’ll reflect on how the book has adapted across editions to keep pace with AI’s growth, and why its balanced perspective makes it essential reading for anyone entering the field.

    33 mins
  • Episode 47 – Deep Reinforcement Learning
    Aug 28 2025

    Reinforcement learning (RL) has always been about trial and error—but with deep learning, RL reached new heights. This episode explains how agents like DQN learned to play Atari games directly from pixels and how policy gradient methods advanced robotics and decision-making. We’ll also discuss the limitations of deep RL—its brittleness, sample inefficiency, and ethical concerns—and why it remains both one of AI’s most powerful and most challenging frontiers.

    34 mins
  • Episode 46 – AlphaGo and Beyond
    Aug 28 2025

    When DeepMind’s AlphaGo defeated world champion Lee Sedol in 2016, it stunned the world. This episode dives into how reinforcement learning and neural nets combined to master the ancient game of Go—a feat once thought impossible. We’ll unpack the algorithms behind AlphaGo, the cultural significance of its victory, and how similar methods now power breakthroughs in strategy, optimization, and science.

    21 mins
  • Episode 45 – Transformers Transform AI
    Aug 28 2025

    In 2017, a paper titled Attention Is All You Need introduced transformers—a new architecture that would reshape AI. This episode explains how self-attention mechanisms allowed models to scale, capture context, and power today’s large language models like GPT. We’ll explore why transformers outperformed previous models, how they revolutionized NLP, and why they’re now being applied far beyond text to images, video, and protein folding.

    34 mins
  • Episode 44 – RNNs and LSTMs
    Aug 28 2025

    Sequential data—like speech, text, and time series—requires memory. This episode explores recurrent neural networks (RNNs) and long short-term memory networks (LSTMs), which gave AI the ability to capture temporal patterns. We’ll look at how LSTMs mitigated the vanishing gradient problem, enabling breakthroughs in language modeling, translation, and speech recognition. Though later eclipsed by transformers, RNNs and LSTMs paved the way for modern NLP.

    37 mins