Episodes

  • AI Ghost Workers: The Human Cost [39]
    Apr 28 2026

    AI safety depends on invisible workers in Kenya and the Philippines who label disturbing content for hours daily. Studies show 81% develop severe PTSD after reviewing child abuse, torture, and mass violence to train AI filters. Big tech companies including Meta, TikTok, and OpenAI outsource this trauma while enforcing quotas of 700 items per day and nondisclosure agreements that silence workers. Kenya's content moderators are unionizing and fighting back. Is artificial intelligence worth the human cost?


    Episode notes: https://aifreakyfacts.com/stories/


    Topics Covered:

    Artificial intelligence, AI dangers, AI ethics, content moderation labor, mental health crisis, PTSD depression anxiety, Kenya Philippines workers, AI training data, Sama TaskUs Majorel, Meta Facebook TikTok OpenAI, psychological trauma, quota systems, nondisclosure agreements, African Content Moderators Union, labor lawsuits, worker organizing, tech exploitation, AI Freaky Facts, AI podcast


    References:

    1. TIME Magazine (June 19, 2025) — "Exclusive: Global Safety Rules Aim to Protect AI's Most Traumatized Workers"

    https://time.com/7295662/ai-workers-safety-rules/

    2. The Bureau of Investigative Journalism (April 27, 2025) — "Meta's content moderators face worst conditions yet at secret Ghana site"

    https://www.thebureauinvestigates.com/stories/2025-04-27/suicide-attempts-sackings-and-a-vow-of-silence-metas-new-moderators-face-worst-conditions-yet

    Investigative report on conditions at Meta's Ghana moderation facility after Kenya lawsuits.

    3. Context by Thomson Reuters Foundation (July 3, 2025) — "Content moderators for Big Tech unite to tackle mental trauma"

    https://www.context.news/big-tech/content-moderators-for-big-tech-unite-to-tackle-mental-trauma

    4. IHRB (Institute for Human Rights and Business) (November 27, 2025) — "Content moderation is a new factory floor of exploitation"

    https://www.ihrb.org/latest/content-moderation-is-a-new-factory-floor-of-exploitation-labour-protections-must-catch-up

    5. ArXiv Research Paper (March 3, 2026) — "Beyond Content Exposure: Systemic Factors Driving Moderators' Mental Health Crisis in Africa"

    https://arxiv.org/html/2604.15321

    6. Computer Weekly (September 2024) — "Kenyan workers win High Court appeal to take Meta to trial"

    https://www.computerweekly.com/feature/Kenyan-workers-win-High-Court-appeal-to-take-Meta-to-trial

    Legal victory for 185 content moderators suing Meta and contractors over working conditions.

    7. Rest of World (December 20, 2023) — "Meta's content moderators in Kenya fight for lost pay"

    https://restofworld.org/2023/meta-content-moderators-kenya-fired-unionize/

    8. Rest of World (December 20, 2023) — "The man leading Kenyan content moderators' battle against Meta"

    https://restofworld.org/2023/kenya-content-moderators-battle-meta/

    9. Jacobin Magazine (February 14, 2024) — "Kenyan Courts Keep Telling Meta to Let Workers Unionize"

    https://jacobin.com/2024/02/kenya-courts-meta-content-moderation-union

    10. Digital Society Blog (HIIG) (October 30, 2025) — "Inside content moderation"

    https://www.hiig.de/en/inside-content-moderation/


    Music Credits:

    1. "Sad Violin 5" (Chrispixer)

    https://pixabay.com/music/classical-string-quartet-sad-violin-5-456715/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Dark Ambient Emotions Music" (DeusLower)

    https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "Background Ambient Documentary" (AKTASOK)

    https://pixabay.com/music/corporate-background-ambient-documentary-173954/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "Internal Scream" (alanajordan)

    https://pixabay.com/music/vocal-internal-scream-514312/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    25 mins
  • AI Is Thirsty: The Water Crisis Behind Every Query [38]
    Apr 21 2026

    AI is thirsty, and you are paying the tab. Every query evaporates water from desert reservoirs to cool the servers that answer it. We investigate the hidden water crisis behind AI infrastructure, from the secretive Microsoft Buckeye facility to residents fined for their water use while nearby data centers drink the town dry. Who authorized this trade-off? Steve Atwal uncovers the staggering environmental cost of the AI boom and the technology that could fix it.


    Episode notes: https://aifreakyfacts.com/stories/


    Topics Covered:

    artificial intelligence, AI ethics, AI infrastructure, AI risks, data center water consumption, environmental impact of AI, water crisis, Arizona drought, Microsoft Buckeye, Intel Chandler, Loudoun County data centers, water cooling, immersion cooling, closed loop systems, water positivity, tech accountability, AI energy consumption, sustainable AI, AI Freaky Facts, Steve Atwal, ai podcast


    References:

    1. Data Centers' Water Use Is Hard to Track, Raising Concerns in the Drought-Prone West — KUNC / Mountain West News Bureau, April 2026

    https://www.kunc.org/2026-04-15/data-centers-water-hard-track-raising-concerns-drought-west

    2. The New Battleground: Water Rights and Data Center Development in the AI Era — Climate Solutions Legal Digest, April 2026

    https://www.climatesolutionslaw.com/2026/04/the-new-battleground-water-rights-and-data-center-development-in-the-ai-era/

    3. AI's Growing Thirst for Water Is Becoming a Public Health Risk — Al Jazeera, January 2026

    https://www.aljazeera.com/opinions/2026/1/21/ais-growing-thirst-for-water-is-becoming-a-public-health-risk

    4. Arizona's Water is Drying Up: That Won't Stop Its Data Center Rush — Grist, March 2026

    https://grist.org/technology/arizona-water-data-centers-semiconducters/

    5. Data Centers and Water Consumption — Environmental and Energy Study Institute (EESI)

    https://www.eesi.org/articles/view/data-centers-and-water-consumption

    6. Dateline Ashburn: The Thirst for AI Raises Alarms in Virginia — Broadband Breakfast, September 2025

    https://broadbandbreakfast.com/dateline-ashburn-the-thirst-for-ai-raises-alarms-in-virginia/

    7. Data Drain: The Land and Water Impacts of the AI Boom — Lincoln Institute of Land Policy, October 2025

    https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/

    8. Drained by Data: The Cumulative Impact of Data Centers on Regional Water Stress — Ceres, September 2025

    https://www.ceres.org/resources/reports/drained-by-data-the-cumulative-impact-of-data-centers-on-regional-water-stress

    9. As Data Centers Multiply in the Chesapeake Region, Water Use Increases Too — Bay Journal, October 2025

    https://www.bayjournal.com/news/pollution/as-data-centers-multiply-in-the-chesapeake-region-water-use-increases-too/article_ebcb4891-d6d6-4b42-8bb5-14bf61981531.html

    10. AI, Data Centers, and Water — Brookings Institution, November 2025

    https://www.brookings.edu/articles/ai-data-centers-and-water/


    Music Credits:

    1. "Sad Violin 4" (Chrispixer)

    https://pixabay.com/music/folk-sad-violin-4-343723/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Dark Ambient Emotions Music" (DeusLower)

    https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "Background Ambient Documentary" (AKTASOK)

    https://pixabay.com/music/corporate-background-ambient-documentary-173954/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "Rain on the Roof" (alanajordan)

    https://pixabay.com/music/indie-pop-rain-on-the-roof-394402/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    25 mins
  • AI Erased One Billion Dollars in Debt: No Lawyers No Fees [37]
    Apr 14 2026

    AI is erasing billions in debt for families who cannot afford a lawyer, with no legal fees required. Upsolve built a nonprofit platform that automates complex legal forms for bankruptcy filers to help them find a fresh start. We explore why 92% of legal problems go unaddressed and why human oversight is a critical safety requirement for high stakes automation. Is justice finally becoming accessible through artificial intelligence?


    Episode notes: https://aifreakyfacts.com/stories/


    Topics Covered:

    artificial intelligence, AI ethics, AI and society, AI risks, access to justice, AI legal tools, debt relief, bankruptcy filing, legal aid, justice gap, AI accountability, AI safety, responsible AI, AI nonprofit, Upsolve, Jonathan Petts, AI paralegal, low income Americans, human oversight, AI Freaky Facts, Steve Atwal, ai podcast


    References:

    1. Upsolve Surpasses $1 Billion in Debt Relief for Low-Income Families — Forbes, March 9, 2026

    https://www.forbes.com/sites/fastforward/2026/03/09/upsolves-ai-paralegal-helps-erase-1b-in-debt/

    2. The Story Behind Upsolve — Jonathan Petts, Upsolve.org, December 5, 2025

    https://upsolve.org/learn/our-story/

    3. Justice Gap Research — Legal Services Corporation

    https://www.lsc.gov/initiatives/justice-gap-research

    4. The Justice Gap: Executive Summary — Legal Services Corporation, 2022

    https://justicegap.lsc.gov/resource/executive-summary/

    5. LSC Says $2 Billion Needed to Address Low-Income Americans Unmet Civil Legal Needs — Legal Services Corporation, April 2026

    https://www.lsc.gov/press-release/lsc-says-2-billion-needed-address-low-income-americans-unmet-civil-legal-needs

    6. White House Budget Proposes Eliminating LSC — Legal Services Corporation

    https://www.lsc.gov/press-release/white-house-budget-proposes-eliminating-lsc-defunding-civil-legal-aid-millions-low-income-americans

    7. Achieving Civil Justice — American Academy of Arts and Sciences

    https://www.amacad.org/publication/achieving-civil-justice/section/3

    8. Upsolve — Wikipedia

    https://en.wikipedia.org/wiki/Upsolve

    9. Bridging the $140 Billion Gap: How We Can Close the Unclaimed Benefits Crisis — Link Health, April 2025

    https://link-health.org/2025/04/22/bridging-the-140-billion-gap-how-we-can-close-the-unclaimed-benefits-crisis/

    10. AI and Technology Help Bridge Access to Justice — Pro Bono Institute, February 2026

    https://www.probonoinst.org/2026/02/06/ai-and-technology-help-bridge-access-to-justice/


    Music Credits:

    1. "Sad Thoughtful Serious Piano (Thoughts In Silence)" (Ashot_Danielyan)

    https://pixabay.com/music/main-title-sad-thoughtful-serious-piano-thoughts-in-silence-115091/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Dark Ambient Emotions Music" (DeusLower)

    https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "Background Ambient Documentary" (AKTASOK)

    https://pixabay.com/music/corporate-background-ambient-documentary-173954/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "Tell Your Story" (alanajordan)

    https://pixabay.com/music/pop-tell-your-story-417312/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    25 mins
  • AI Will Outlive All of Us: And It Is Already Deciding What Survives [36]
    Apr 7 2026

    AI is now curating human history, deciding which documents, images, languages, and cultural memories survive in the global digital archive. At institutions like the National Library of Norway and the Internet Archive, machine-learning systems process millions of artifacts a day, ranking what gets preserved and what quietly disappears. But these systems are trained on biased datasets shaped by centuries of unequal power. Indigenous, non-Western, and marginalized communities have no oversight, no appeals process, and no seat at the table as artificial intelligence determines what future generations will know about them. When private companies and opaque algorithms control cultural memory, who decides what survives?


    Episode notes: https://aifreakyfacts.com/stories/


    References:

    1. National Library of Norway – NorHand AI Model (Transkribus)

    https://blog.transkribus.org/en/the-norhand-model-a-new-public-ai-model-national-library-of-norway

    2. How Can We Improve the Diversity of Archival Collections with AI? (Springer, February 2025)

    https://link.springer.com/article/10.1007/s00146-025-02222-z

    3. Internet Archive – Wayback Machine (Over 800 Billion Web Pages)

    https://www.eff.org/deeplinks/2026/03/blocking-internet-archive-wont-stop-ai-it-will-erase-webs-historical-record

    4. Internet Archive Europe – AI and Digital Preservation

    https://www.internetarchive.eu/2025/11/05/more-than-storage-on-world-digital-preservation-day-ai-is-helping-unlock-our-memories/

    5. Vesuvius Challenge – Grand Prize Winners (February 2024)

    https://scrollprize.org/grandprize

    6. University of Kentucky – Vesuvius Challenge Breakthrough

    https://uknow.uky.edu/research/grand-prize-discovery-made-2000-year-old-herculaneum-scrolls

    7. DeepMind – Aeneas AI for Ancient Text Restoration (Nature, July 2025)

    https://www.nature.com/articles/s41586-025-09292-5

    8. Tracing the Bias Loop: AI, Cultural Heritage and Bias-Mitigating in Practice (Springer, April 2025)

    https://link.springer.com/article/10.1007/s00146-025-02349-z

    9. Genus UK – AI Smart-Archives and Responsible Innovation

    https://genus.uk/ai-smart-archives-preservation/

    10. Historica.org – "How AI Is Changing Digital Archives: Possibilities and Pitfalls"

    https://www.historica.org/blog/ais-role-in-preserving-digital-archives


    Topics Covered:

    AI archiving, Digital preservation, Algorithmic curation of history, Cultural memory systems, National Library of Norway AI, Internet Archive machine learning, AI bias and data inequality, Indigenous and marginalized heritage, Data colonialism, Corporate control of archives, AI restoration of ancient texts, Vesuvius Challenge AI decoding, MIT CSAIL pottery reconstruction, AI translation and cultural nuance, UNESCO digital heritage governance, Algorithmic accountability, Democratic oversight of AI, AI ethics, AI Freaky Facts, Steve Atwal, AI documentary podcast


    Music Credits:

    1. "Atmospheric Dark Cinematic" (Lilliben)

    https://pixabay.com/music/mystery-atmospheric-dark-cinematic-365139/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Dark Ambient Emotions Music" (DeusLower)

    https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "Background Ambient Documentary" (AKTASOK)

    https://pixabay.com/music/corporate-background-ambient-documentary-173954/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "Rewrite the Future" (alanajordan)

    https://pixabay.com/music/electronic-rewrite-the-future-494220/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    27 mins
  • AI Is Rewriting History: And Nobody Is Stopping It [35]
    Mar 31 2026

    AI is no longer just generating deepfakes. It is fabricating history itself. Artificial intelligence systems are creating fake historical photographs, forged archival records, and invented events now indexed by search engines and used in school projects. MIT researchers found that AI-edited visuals more than double the formation of false memories. Governments are issuing warnings. Historians are alarmed. Who controls history when anyone can fabricate it in three seconds?


    Episode notes: https://aifreakyfacts.com/stories/


    References:

    1. Vice — People Are Creating Records of Fake Historical Events Using AI (2023)

    https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/

    2. MIT Media Lab — Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection — Pataranutaporn et al., CHI (2025)

    https://www.media.mit.edu/projects/ai-false-memories/overview/

    3. Bloomberg / MIT Media Lab — AI Does Not Just Lie, It Can Make You Believe It — F.D. Flam (August 2025) — free, no paywall

    https://www.media.mit.edu/articles/ai-doesn-t-just-lie-it-can-make-you-believe-it/

    4. Epoch Magazine — Real Enough? How Forgeries and AI Hoaxes Shape Historical Memory (2026)

    https://www.epoch-magazine.com/post/real-enough-how-forgeries-and-ai-hoaxes-shape-historical-memory

    5. Historica.org — AI Hallucinations and the Risks to Historical Research Integrity (2025)

    https://www.historica.org/blog/ai-fictions-historiography-misinformation

    6. Stimson Center — AI in the Age of Fake (Imagined) Content (2026)

    https://www.stimson.org/2026/ai-in-the-age-of-fake-imagined-content/

    7. Wikipedia — Hallucination (Artificial Intelligence) — includes Deloitte government report hallucination cases

    https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

    8. American Historical Association — Guiding Principles for Artificial Intelligence in History Education (July 2025)

    https://www.historians.org/resource/guiding-principles-for-artificial-intelligence-in-history-education/

    9. Legal History Insights — The Specter of AI-Generated Historical Documents (2024)

    https://legalhistoryinsights.com/the-specter-of-ai-generated-historical-documents/

    10. MIT Media Lab — Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation — Pataranutaporn et al., IUI (2025)

    https://www.media.mit.edu/publications/slip-through-the-chat-subtle-injection-of-false-information-in-llm-chatbot-conversations-increases-false-memory-formation/


    Topics Covered:

    AI misinformation, AI deepfakes, Fake historical photos, AI-generated history, AI hallucinations, False memories and AI, Historical misinformation, AI and collective memory, AI archival manipulation, AI authenticity and provenance, MIT false memory research, AI regulation and ethics, AI and democracy, AI risks and dangers, AI Freaky Facts, Steve Atwal, AI documentary podcast


    Music Credits:

    1. "Atmospheric Dark Cinematic" (Lilliben)

    https://pixabay.com/music/mystery-atmospheric-dark-cinematic-365139/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Dark Ambient Emotions Music" (DeusLower)

    https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "Background Ambient Documentary" (AKTASOK)

    https://pixabay.com/music/corporate-background-ambient-documentary-173954/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "Memory Loss" (alanajordan)

    https://pixabay.com/music/pop-memory-loss-481722/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    24 mins
  • AI Is Replacing Your Therapist: When Mental Health Apps Get It Wrong [34]
    Mar 24 2026

    AI mental health apps promise support anytime, anywhere, but the reality is far more dangerous. Studies show some AI therapy tools respond appropriately in less than 60% of interactions. For teenagers in crisis, certain companion apps failed 78% of the time. The sensitive data people share with these apps is often not protected by HIPAA at all. Is artificial intelligence in mental health care putting the most vulnerable people at risk?


    Episode notes: https://aifreakyfacts.com/stories/


    References:

    1. NPR — "With therapy hard to get, people lean on AI for mental health. What are the risks?" (September 2025)

    https://www.npr.org/sections/shots-health-news/2025/09/30/nx-s1-5557278/ai-artificial-intelligence-mental-health-therapy-chatgpt-openai

    2. Dartmouth College — "First Therapy Chatbot Trial Yields Mental Health Benefits" (March 2025)

    https://home.dartmouth.edu/news/2025/03/first-therapy-chatbot-trial-yields-mental-health-benefits

    3. MIT Technology Review — "The first trial of generative AI therapy shows it might help with depression" (March 2025)

    https://www.technologyreview.com/2025/03/28/1114001/the-first-trial-of-generative-ai-therapy-shows-it-might-help-with-depression/

    4. Stanford HAI — "Exploring the Dangers of AI in Mental Health Care" (2025)

    https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

    5. American Psychological Association — "Using generic AI chatbots for mental health support: A dangerous trend" (March 2025)

    https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists

    6. Psychology Today — "The Hidden Dangers of AI-Driven Mental Health Care" (January 2026)

    https://www.psychologytoday.com/us/blog/its-not-just-in-your-head/202601/the-hidden-dangers-of-ai-driven-mental-health-care

    7. Undark — "Researchers Weigh the Use of AI for Mental Health" (November 2025)

    https://undark.org/2025/11/04/chatbot-mental-health/

    8. Frontiers in Digital Health — "Balancing risks and benefits: clinicians' perspectives on the use of generative AI chatbots in mental healthcare" (May 2025)

    https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1606291/full

    9. WHO — "Towards responsible AI for mental health and well-being: experts chart a way forward" (March 2026)

    https://www.who.int/news/item/20-03-2026-towards-responsible-ai-for-mental-health-and-well-being--experts-chart-a-way-forward

    10. Stateline — "AI therapy chatbots draw new oversight as suicides raise alarm" (January 2026)

    https://stateline.org/2026/01/15/ai-therapy-chatbots-draw-new-oversight-as-suicides-raise-alarm/


    Topics Covered:

    artificial intelligence, AI mental health, AI therapy apps, mental health chatbot risks, AI crisis response, AI data privacy, therapy chatbot failures, AI regulation, mental health technology, AI ethics, AI dangers, AI risks, mental health apps, chatbot therapy, AI Freaky Facts, AI podcast


    Music Credits:

    1. "Sad Thoughtful Serious Piano (Thoughts In Silence)" (Ashot_Danielyan)

    https://pixabay.com/music/main-title-sad-thoughtful-serious-piano-thoughts-in-silence-115091/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Dark Ambient Emotions Music" (DeusLower)

    https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "Ambient Emotional Cinematic" (RomanSenykMusic)

    https://pixabay.com/music/build-up-scenes-ambient-emotional-cinematic-126143/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "Protected by Angels" (alanajordan)

    https://pixabay.com/music/pop-protected-by-angels-340111/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    27 mins
  • AI Trapped You in Your Reality: And Called It Personalization [33]
    Mar 17 2026

    AI recommendation algorithms are not just showing you content. They are shaping the reality you live in. This episode exposes how Facebook, YouTube, and TikTok create filter bubbles, drive algorithmic radicalization, and fracture shared reality, tracing the story from the Facebook emotional contagion experiment to internal documents showing users fall into negative bubbles within 30 minutes. Is artificial intelligence dangerous when it decides what you believe before you do?


    Episode notes: https://aifreakyfacts.com/stories/


    References:

    1. Giansiracusa, N. (2025). How the Secret Algorithms Behind Social Media Actually Work. TIME Magazine, August 7, 2025.

    https://time.com/7308120/secret-algorithms-behind-social-media/

    2. Kramer, A.D.I., Guillory, J.E., & Hancock, J.T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790.

    https://www.pnas.org/doi/10.1073/pnas.1320040111

    3. Haroon, M. et al. (2023). YouTube, The Great Radicalizer? Auditing and Mitigating Ideological Biases in YouTube Recommendations. PNAS.

    https://www.pnas.org/doi/10.1073/pnas.2213020120

    4. Liu et al. (2025). Algorithmic Recommendations Have Limited Effects on Polarization. University of Pennsylvania / Princeton.

    https://dcknox.github.io/files/LiuEtAl_AlgoRecsLimitedPolarizationYouTube.pdf

    5. Filter bubble - Wikipedia.

    https://en.wikipedia.org/wiki/Filter_bubble

    6. States Probed TikTok for Years - Internal Documents. OPB / NPR, October 2024.

    https://www.opb.org/article/2024/10/11/tiktok-knows-its-app-is-harming-kids-new-internal-documents-show/

    7. European Commission. Digital Services Act - Article 27: Recommender System Transparency.

    https://www.eu-digital-services-act.com/Digital_Services_Act_Article_27.html

    8. European Commission. Digital Services Act: Keeping Us Safe Online (2025).

    https://commission.europa.eu/news-and-media/news/digital-services-act-keeping-us-safe-online-2025-09-22_en

    9. DSA Observatory (2024). The Regulation of Recommender Systems Under the DSA.

    https://dsa-observatory.eu/2024/11/22/the-regulation-of-recommender-systems-under-the-dsa-a-transition-from-default-to-multiple-and-dynamic-controls/

    10. MDPI / Society (2025). Trap of Social Media Algorithms: A Systematic Review on Filter Bubbles, Echo Chambers, and Their Impact on Youth.

    https://www.mdpi.com/2075-4698/15/11/301


    Topics Covered:

    AI recommendation algorithms, artificial intelligence, filter bubbles, echo chambers, social media algorithms, algorithmic radicalization, Facebook emotional contagion, AI personalization, AI risks, AI and democracy, EU Digital Services Act, AI regulation, AI and mental health, confirmation bias, polarization, AI Freaky Facts


    Music Credits:

    1. "Mystery" (The_Mountain)

    https://pixabay.com/music/build-up-scenes-mystery-163875/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Dark Atmospheric Soundscape" (Fopihe)

    https://pixabay.com/music/ambient-dark-atmospheric-soundscape-325384/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "inspiration - Calm & Uplifting Ambient Music" (Clavier-Music)

    https://pixabay.com/music/ambient-inspiration-calm-amp-uplifting-ambient-music-318243/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "One Heart" (alanajordan)

    https://pixabay.com/music/pop-one-heart-428675/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    23 mins
  • AI Is Running Your City: And Nobody Voted For It [32]
    Mar 11 2026

    AI is now running your city, and nobody asked if that was acceptable. In this episode of AI Freaky Facts, we investigate the hidden smart city AI infrastructure, from AI surveillance and facial recognition to the accountability gaps around automated license plate readers that perform 20 billion scans a month. We expose what happens when city AI fails and explore how Amsterdam and Helsinki are setting a global benchmark for artificial intelligence transparency. Is AI dangerous when it controls your street? The answer is already unfolding.


    Episode notes: https://aifreakyfacts.com/stories/


    References:

    1. Why Some Cities Are Canceling Flock License Plate Reader Contracts — NPR.

    https://www.npr.org/2026/02/17/nx-s1-5612825/flock-contracts-canceled-immigration-survillance-concerns

    2. Cities Ditch License Plate Readers Over Immigration Surveillance Fears — US News / Associated Press.

    https://www.usnews.com/news/best-states/california/articles/2026-03-05/out-of-state-police-access-silicon-valley-license-plate-readers

    3. San Jose Restricts Use of License Plate Readers — San José Spotlight.

    https://sanjosespotlight.com/san-jose-weighs-new-safeguards-for-flock-license-plate-reader-cameras/

    4. Cities Join Amazon in Ending Flock Partnership After Super Bowl Ad — Fortune.

    https://fortune.com/2026/03/03/cities-end-flock-partnership-amazon-ring-surveillance-super-bowl-ad/

    5. Facial Recognition in Detroit — Project Green Light Explained — Outlier Media.

    https://outliermedia.org/facial-recognition-detroit-police-explained/

    6. Amsterdam's AI Register — OECD.AI Policy Initiative.

    https://oecd.ai/en/dashboards/policy-initiatives/amsterdams-ai-register-8123

    7. Helsinki and Amsterdam Launch AI Registers to Detail City Systems — AI for Good / ITU.

    https://aiforgood.itu.int/helsinki-and-amsterdam-launch-ai-registers-to-detail-city-systems/

    8. AI and Smart Cities — Surveillance Trade-Off — Diplo Foundation.

    https://www.diplomacy.edu/blog/ai-smart-cities-and-the-surveillance-trade-off/

    9. AI and Democracy — Mapping the Intersections — Carnegie Endowment for International Peace.

    https://carnegieendowment.org/research/2026/01/ai-and-democracy-mapping-the-intersections

    10. How AI Is Reshaping Local Government and Raising Ethical Dilemmas — University of Virginia School of Data Science / ICMA Magazine.

    https://datascience.virginia.edu/news/how-ai-reshaping-local-government-and-raising-ethical-dilemmas


    Topics Covered:

    artificial intelligence, AI surveillance, AI risks, AI safety, AI and privacy, AI accountability, AI transparency, AI governance, AI regulation, AI bias, AI and civil rights, AI and democracy, AI and society, facial recognition, smart city AI, urban AI, digital privacy rights, data surveillance, algorithmic accountability, is AI dangerous, AI Freaky Facts, AI podcast, Detroit AI, Amsterdam AI, Helsinki AI, San Francisco AI, automated license plate readers (ALPR)


    Music Credits:

    1. "Traffic in City" (storegraphic)

    https://pixabay.com/sound-effects/city-traffic-in-city-309236/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    2. "Cinematic Moog Motion" (Zen_Man)

    https://pixabay.com/music/pulses-cinematic-moog-motion-3507/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    3. "Somber Monolith" (Psychronic)

    https://pixabay.com/music/ambient-somber-monolith-408777/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    4. "Soft Music" (NastelBom)

    https://pixabay.com/music/ambient-soft-music-495878/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    5. "Northwind crew" (ParadiseKing)

    https://pixabay.com/music/vocal-northwind-crew-425252/

    Free for use under the Pixabay license - https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    23 mins