
The Tinker Table

By: Hannah Lloyd

About this listen

Welcome to The Tinker Table—a podcast where big ideas meet everyday questions. Hosted by engineering educator, researcher, and systems thinker Hannah Lloyd, this show invites curious minds to pull up a seat and explore the intersection of technology, ethics, design, and humanity. From AI ethics and digital literacy to intentional innovation and creative problem-solving, each episode breaks down complex topics into thoughtful, accessible conversations. Whether you’re a teacher, a parent, a healthcare worker, or just someone trying to keep up with a rapidly changing world—you belong here.
Episodes
  • Episode 3: When AI gets it Wrong
    Jul 8 2025

    When artificial intelligence systems fail, the consequences aren’t always small—or hypothetical. In this episode of The Tinker Table, we dive into what happens after the error: Who’s accountable? Who’s harmed? And what do these failures tell us about the systems we’ve built?


    We explore real-world case studies like:


    The wrongful arrest of Robert Williams in Detroit due to facial recognition bias

    The racially biased predictions of COMPAS, a sentencing algorithm used in U.S. courts

    How predictive policing tools reinforce historical over-policing in marginalized communities

    We also tackle AI hallucinations—false but believable outputs from tools like ChatGPT and Bing’s Sydney—and the serious trust issues that result, from fake legal citations to wrongful plagiarism flags.


    Finally, we examine the dangers of black-box algorithms—opaque decision-making systems that offer no clarity, no appeal, and no path to accountability.


    📌 This episode is your reminder that AI is only as fair, accurate, and just as the humans who design it. We don’t just need smarter machines—we need ethically designed ones.


    🔍 Sources & Further Reading:


    Facial recognition misidentification

    Machine bias

    Predictive policing

    AI hallucinations


    🎧 Tune in to learn why we need more than innovation—we need accountability.

    9 mins
  • Episode 2: Who is at the AI table?
    Jul 1 2025
    If AI is shaping our future, we have to ask: Who’s shaping AI? In this episode of The Tinker Table, Hannah digs into the essential question of representation in technology—and why it matters who gets invited to build the tools we all use.


    We explore how a lack of diversity in engineering and data science has led to real-world consequences: from facial recognition tools that misidentify women of color (Buolamwini & Gebru, MIT Media Lab, 2018) to healthcare algorithms that underestimated Black patients’ needs by nearly 50% (Obermeyer et al., Science, 2019).


    This episode blends Hannah’s own research on belonging in engineering education with broader examples across healthcare, education, and AI development. You’ll hear why representation isn’t just about race or gender—it’s about perspective, lived experience, and systemic change. And most importantly, we talk about what it means to build tech that truly works for everyone. Whether you’re a developer, educator, team leader, or thoughtful user—pull up a seat.


    🔍 Sources & Further Reading:


    Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

    Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
    9 mins
  • Episode 1: Is AI Good?
    Jul 1 2025
    Is AI good? Is it bad? Or is it something more complicated—and more human—than we tend to admit? In this first episode of The Tinker Table, Hannah breaks down the foundations of AI ethics—what it is, why it matters, and where it shows up in our lives.


    From biased hiring algorithms that penalized women (Winick, 2022) to predictive systems shaped by decades-old redlining data (The Markup, 2021), and even soap dispensers that don’t detect darker skin tones (Fussell, 2017)—this episode explores how AI isn’t just about what we can build, but what we should.


    We ask: Who gets to shape the tools shaping our world? What values are embedded in our algorithms? And what happens when human bias becomes digital infrastructure? Whether you’re a teacher, parent, technologist, or simply AI-curious—this is the conversation to start with.


    🔍 Sources & Further Reading:


    Winick, E. (2022, June 17). Amazon ditched AI recruitment software because it was biased against women. MIT Technology Review.

    The Markup. (2021, August 25). The secret bias hidden in mortgage-approval algorithms.

    Fussell, S. (2017, August 17). Why can’t this soap dispenser identify dark skin? Gizmodo.
    10 mins

