Episode 3: When AI Gets It Wrong


About this listen

When artificial intelligence systems fail, the consequences aren’t always small—or hypothetical. In this episode of The Tinker Table, we dive into what happens after the error: Who’s accountable? Who’s harmed? And what do these failures tell us about the systems we’ve built?


We explore real-world case studies, including:

- The wrongful arrest of Robert Williams in Detroit after a facial recognition misidentification
- The racially biased risk scores of COMPAS, a recidivism-prediction algorithm used to inform sentencing in U.S. courts
- How predictive policing tools reinforce historical over-policing in marginalized communities

We also tackle AI hallucinations (false but believable outputs from tools like ChatGPT and Bing's Sydney) and the serious trust issues that result, from fake legal citations to wrongful plagiarism flags.


Finally, we examine the dangers of black-box algorithms—opaque decision-making systems that offer no clarity, no appeal, and no path to accountability.


📌 This episode is your reminder that AI is only as fair, accurate, and just as the humans who design it. We don’t just need smarter machines—we need ethically designed ones.


🔍 Sources & Further Reading:


Facial recognition misidentification

Machine bias

Predictive policing

AI hallucinations


🎧 Tune in to learn why we need more than innovation—we need accountability.
