Episodes

  • Redesigning the Syllabus for Deeper Learning: AI, Empathy, and Assessment
    Dec 17 2025

    Join us for an insightful conversation with Dr. Dana Riger, UNC's inaugural Faculty Fellow for Generative AI, as she guides us through the rapid paradigm shift brought on by AI in higher education. Dr. Riger shares her journey from a "fear-driven" assessment redesign after discovering ChatGPT to a nuanced, values-driven framework for both integrating and avoiding AI in the classroom.

    We dive into practical strategies, like redesigning traditional research papers into creative, AI-avoidant multimedia projects, and intentionally integrating AI for skills development, such as using chatbots for practice dialogues on polarizing topics. Dr. Riger also addresses the institutional challenge of avoiding "one-size-fits-all" AI policies and underscores the importance of fostering an open dialogue. Ultimately, this episode offers a compelling vision for the future of teaching, emphasizing that the human educator's unique value lies in fostering empathy, presence, and critical dialogue, not just imparting knowledge.

    Key Discussion Points:

    • The AI Paradigm Shift: Dr. Riger's initial reaction to ChatGPT and her immediate, fear-driven assessment redesign in 2022.

    • The Nuanced Approach: Distinguishing between AI-avoidant (experiential, creative) and AI-integrated (intentional skill-building) assessments.

    • Practical Examples: How a multimedia project replaces a traditional paper, and using AI to practice difficult, emotionally laden conversations.

    • Leading with Collaboration: Why policing AI use is ineffective and the importance of respecting student autonomy and ethical objections.

    • Institutional Guidance: The missteps of mandated, uniform AI policies and the need for a thoughtful "middle ground" approach.

    • The Value of Process: Shifting assessment focus from the final product to the process of learning (drafts, revisions, process logs).

    • The Core Question: What are the unique, human-centered qualities (empathy, presence) that educators must prioritize in the age of AI?

    42 mins
  • Trailblazing AI Literacy: Connor Mulvaney’s Rural Classroom Revolution (Rebroadcast)
    Nov 19 2025

    In this episode from the archives, Montana science teacher and district AI lead Connor Mulvaney joins host Lydia Kumar to share how he turned fishing photos, traffic-light rubrics, and a healthy dose of curiosity into AI leadership in Montana and across the nation. Fresh off announcing aiEDU’s largest Trailblazers Fellowship expansion, Connor shares stories about leading students and educators to responsible AI adoption. In this episode, you’ll learn:

    • Break-the-Ice Questions – Three questions that instantly surface student misconceptions (and enthusiasm) about AI.

    • Fake Fish, Real Ethics – Using deepfake trout to spark serious debate on consent, bias, and digital citizenship.

    • Trailblazers 2.0 – What’s inside the 10-week fellowship (virtual sessions, $875 stipend, national recognition) and why rural teachers asked for it.

    This episode is for K-12 educators, district leaders, and mission-driven education organizations who want to shift AI conversations from fear and plagiarism to possibility and purpose.
    40 mins
  • Danelle Brostrom on Leading AI: Privacy, Humanity, and Progress in Schools
    Nov 12 2025

    K-12 EdTech coach Danelle Brostrom joins us to talk about bringing curiosity, guardrails, and humanity to AI in schools. We dig into what we should learn from the social-media era, how librarians are frontline partners for information literacy, the real risks inside edtech privacy policies (and how districts can negotiate them), and concrete ways AI can expand access, like instant translation, reading-level adjustments, and executive-function supports. If you’re a district leader, principal, or teacher trying to move from paralysis to practical action, this conversation is your on-ramp.

    Key Takeaways
    • Don’t repeat social media’s mistakes. Protect in-person connection; teach students how to spot manipulated media and deepfakes.

    • Librarians = misinformation SWAT team. Pair EdTech with media specialists to teach reverse-image search, corroboration, and bias checks.

    • AI is already in your stack. Inventory tools teachers use; many “non-AI” products now include AI features that touch student data.

    • Equity in action. Real-time translation, leveled texts, and scaffolded task breakdowns can immediately widen access—offer them to all students.

    • PD that sticks. Start with low-stakes personal uses (meal plans, resumes), then ethics, then classroom workflows—build a safe space to wrestle.

    • Listen first. Talk to students about how they’re using AI; invite skeptics to the table.

    • Leadership mindset. Curiosity, grace, and progress over perfection.

    37 mins
  • 24. Duke's Ahmed Boutar on AI Alignment: Ensuring Users Get Desired Results
    Nov 5 2025

    In this episode, we’re joined by Ahmed Boutar, an Artificial Intelligence Master’s Student at Duke University, who brings a rigorous engineering focus to the ethics and governance of AI. Ahmed’s work centers on ensuring new technology aligns with human values, including his research on Human-Aligned Hazardous Driving (HAHD) systems for autonomous vehicles.

    This conversation is an urgent exploration of the practical and ethical challenges facing education and industry as AI progresses rapidly. Ahmed provides a critical perspective on how to maintain human judgment and oversight in a world increasingly powered by Large Language Models.

    Key Takeaways
    • The Interpretation Imperative: The most critical role of an educator today is to ensure that students move beyond simply accepting AI output to interpreting it, explaining it, and wrestling with the material in their own words. This is the ultimate guardrail against outsourcing thinking.

    • The Alignment Problem: AI failures often stem from misalignment between the intended goal (outer alignment) and the goal the AI actually optimizes for (inner alignment). The chilling example provided is an AI that solved the objective of "moving the fastest" by designing a tall structure that immediately fell down to maximize speed.

    • Transparency is Governance: For high-stakes decisions like loan applications or hiring, users and regulators must demand transparency into why an AI made a prediction. Responsible development requires diverse perspectives on design teams to prevent innate biases in training data from causing discrimination.

    • Adoption Over Abandonment: As humans, we cannot stop AI's progress. Instead, we must adopt it to augment productivity, while simultaneously creating policy and guardrails that ensure fair and responsible use.

    • A Hope for Scientific Discovery: While concerned about the concentration of AI development in a few large companies, Ahmed remains optimistic about AI's potential in scientific fields like drug discovery and proactively addressing global crises, as seen during the COVID-19 pandemic.

    44 mins
  • The Lifeline of Learning: Dr. Sawsan Jaber on Radical Love, Agency, and Humanizing Education in the Age of AI
    Oct 29 2025

    In this episode, we’re joined by Dr. Sawsan Jaber, a global educator, equity strategist, and author of Pedagogies of Voice. Dr. Jaber’s work is rooted in her lived experience as the daughter of refugees and her profound belief that classrooms must be healing spaces that nurture student voice and radical love.

    This conversation is an urgent exploration of how K-12 leaders can balance the adoption of AI with the non-negotiable mission of humanizing education, ensuring that new technology becomes a tool for liberation, not a weapon for assimilation.

    Key Takeaways
    • The Pendulum of Power: Education constantly swings between standardization (which turns students into "invisible statistics") and human-centered reform. AI presents a moment to resist the swing and focus on qualitative, asset-based learning.

    • Teaching as a Lifeline: Core curriculum skills must be framed as "liberatory skills," like teaching a period as a tool to force a reader to sit in your words, giving students the power to advocate for themselves and their communities.

    • The Criticality Problem: Dr. Jaber cautions against the "dystopian thinking" of letting AI do the thinking. Leaders must prioritize teaching criticality and inquiry, ensuring students never sacrifice unique thought for easily generated output.

    • Trust is the Best AI Detector: The foundation for responsible AI use is built through trust-based relationships. Educators must co-create norms with students and model vulnerability, positioning themselves as fellow learners rather than simply gatekeepers.

    • The Antidote to Hate: Classrooms should be healing spaces that build radical love and mutual understanding. This mission is the most powerful antidote to the culture of fear and single-story narratives that plague society today.

    48 mins
  • 3: Redefining Education with AI: Vera Cubero on Project-Based Learning and Human Connection
    Oct 22 2025

    In this episode from the archives, we’re joined by Vera Cubero, the Emerging Technologies Consultant for the North Carolina Department of Public Instruction (NCDPI) and a co-author of one of the nation's first K-12 AI guidelines. Vera shares her frontline experience transitioning from a classroom teacher piloting 1-to-1 Chromebooks to leading a statewide AI initiative. This conversation is a crucial exploration of how education must fundamentally change its approach—moving beyond simple tech "substitution" to truly "redefine" learning, assessment, and the role of the teacher to prepare all students for an AI-driven future.

    Key Takeaways
    • Beyond the Digital Worksheet: Vera warns that AI in education risks repeating the failures of 1-to-1 Chromebook adoption, where "substitution" (digital worksheets) won out over true learning "redefinition."

    • The AI-Enabled Project: The future of learning isn't just using AI; it's pairing AI with Project-Based Learning (PBL). AI becomes a powerful tool for students to solve complex, real-world problems, moving assessment away from simple essays.

    • Durable Skills Over Rote Answers: Vera argues that AI makes rote memorization obsolete. The new curriculum must focus on building "durable skills" like critical thinking, collaboration, and creativity—skills the future workforce demands.

    • The Guide on the Side: AI doesn't replace teachers; it changes their role. The focus must shift from the "sage on the stage" (delivering content) to the "guide on the side" (coaching, fostering human connection, and guiding student inquiry).

    • AI as the Great Equalizer: Vera's biggest concern is equity. Public schools must act as the "great equalizer," ensuring all students—especially from marginalized communities—gain AI fluency, or the economic divide will widen dramatically.

    40 mins
  • 22. The Steam Engine of Software: Kris Younger on Transforming Education in the Age of AI
    Oct 15 2025

    In this episode, we’re joined by Kris Younger, a longtime technologist and the Director of Education at Zip Code Wilmington, a nonprofit coding bootcamp. Zip Code is on the absolute frontier of technology, helping adults from diverse backgrounds, who often earn between $30,000 and $35,000 per year, rapidly transition into tech careers with salaries in the mid-$80,000s, all in just 12 intense weeks.

    Kris shares his unique perspective on how the role of the software developer is fundamentally changing, shifting from a "coder" to a "programmer" who is more like a business analyst and a director. This conversation is an urgent exploration of how to make education nimble enough to prepare students for the future of work, not the past.

    Key Takeaways

    • The Age of Steam Programming: Kris likens the arrival of generative AI to the shift from sailing ships to steam engines: the fundamental skills needed to build software have changed forever.

    • From Coding to Management: Traditional computer science knowledge of search routines and algorithms is being taken over by LLMs. The crucial human skills are now critical thinking, communication, and management of the AI tools.

    • Projects are the New Exam: In a world where LLMs can generate code, the only effective way to assess knowledge is through project-based work that demands group collaboration and real-world delivery (like building a Slack clone in a week).

    • Weaponize AI in Response: Instead of trying to ban AI, educators must change the assignment. AI is now a power tool; the education challenge is to teach people how to think critically enough to manage that tool effectively.

    • The On-Ramp Problem: Kris's biggest concern is that businesses, confused about the future, will cut off entry-level hiring, denying themselves the adaptable, open-minded new talent who haven't yet learned "what's impossible."

    47 mins
  • 21. AI Engineer Vihaan Nama on Privacy, Practice, and Empowered Learning
    Oct 8 2025

    In this episode, we’re visiting Duke University to meet Vihaan Nama, an AI engineer, researcher, and teaching assistant helping shape how AI is taught and built for the real world. From roles at PS&S and JPMorgan to graduate courses on explainable AI and product management, Vihaan brings a rare combination of technical depth and educator insight.

    If you’ve ever wondered how to make AI education more human, or how to turn student learning data into actionable insight, personalized support, or even a study partner, Vihaan offers both clarity and concrete examples.

    We talk about everything from his early experiments in sentiment analysis to why open-source models matter for student privacy, how retrieval-augmented generation (RAG) is quietly transforming knowledge work, and what schools can do right now to prepare for custom AI tools of their own.

    Key Takeaways
    • Your Notes, Your Assistant: Vihaan envisions a future where students can chat with their own lecture notes, using LLMs to review, revise, and apply information in their own language and context.

    • From Archive to Advantage: Companies (and schools!) are sitting on decades of underused data. With the right AI systems, that information becomes actionable knowledge.

    • Trust Through Transparency: Grounding AI outputs in clear, credible sources is key to building trust, especially in high-stakes environments like education and public services.

    • Small Models, Big Wins: As open-source LLMs become lighter and faster, even modestly funded schools can host private AI tools, no cloud dependency required.

    • Responsible AI = Responsive Leadership: From sustainability audits to ethical guardrails, Vihaan emphasizes that building AI responsibly starts with knowing what your organization values most.

    49 mins