• The Trojan Horse of AI
    Dec 24 2025

    In this final guest episode of the year, we explore AI as a kind of Trojan horse: a technology that promises one thing while carrying hidden costs inside it. Those costs show up in data centers, energy and water systems, local economies, and the communities asked to host the infrastructure that makes AI possible.

    We’re joined by Jon Ippolito and Joline Blais from the University of Maine for a conversation that starts with AI’s environmental footprint and expands into questions of extraction, power, education, and ethics.

    In this episode, we discuss:

    • Why AI can function as a Trojan horse for data extraction and profit
    • What data centers actually do, and why they matter
    • The environmental costs hidden inside “innovation” narratives
    • The difference between individual AI use and industrial-scale impact
    • Why most data center activity isn’t actually AI
    • How communities are pitched data centers—and what’s often left out
    • The role of gender in ethical decision-making in tech
    • What AI is forcing educators to rethink about learning and work
    • Why asking “Who benefits?” still cuts through the hype
    • And how dissonance can be a form of clarity

    Resources mentioned:

    • IMPACT Risk framework: https://ai-impact-risk.com
    • What Uses More: https://what-uses-more.com

    Guests:

    • Jon Ippolito – artist, writer, and curator who teaches New Media and Digital Curation at the University of Maine.
    • Joline Blais – researches regenerative design, teaches digital storytelling and permaculture, and advises the Terrell House Permaculture Center at the University of Maine.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr and 21 mins
  • Easy for Humans, Hard for Machines: The Paradox Nobody Talks About
    Dec 17 2025

    Why can AI crush law exams and chess grandmasters, yet still struggle with word games? In this episode, Kimberly and Jessica use Moravec's Paradox to unpack why machines and humans are "smart" in such different ways—and what that means for how we use AI at work and in daily life.

    They start with a practical fact-check on agentic AI: what actually happens to your data when you let tools like ChatGPT or Gemini access your email, calendar, or billing systems, and which privacy toggles are worth changing. From there, they dive into why AI fails at the New York Times' Connections game, how sci-fi anticipated current concerns about AI psychology decades ago, and what brain-computer interfaces like Neuralink tell us about embodiment and intelligence.

    Along the way: sycophantic bias, personality tests for language models, why edtech needs more friction, and a lighter "pit and peach" segment with unexpected life hacks.


    Resources by Topic

    Privacy & Security (ChatGPT)

    • OpenAI Memory & Controls (Official Guide): https://openai.com/index/memory-and-new-controls-for-chatgpt/
    • OpenAI Data Controls & Privacy FAQ: https://help.openai.com/en/articles/7730893-data-controls-faq
    • OpenAI Agents Guide (developer docs): https://platform.openai.com/docs/guides/agents

    Moravec's Paradox & Cognitive Science

    • Moravec's Paradox (Wikipedia): https://en.wikipedia.org/wiki/Moravec's_paradox
    • Hans Moravec (Wikipedia): https://en.wikipedia.org/wiki/Hans_Moravec


    Sycophancy & LLM Behavior

    • "Sycophancy in Large Language Models: Causes and Mitigations" (arXiv): https://arxiv.org/abs/2411.15287

    • "Personality Testing of Large Language Models: Limited Temporal Stability, but Highlighted Prosociality": https://royalsocietypublishing.org/doi/10.1098/rsos.240180

    Brain-Computer Interfaces & Embodied AI

    • Neuralink: "A Year of Telepathy" Update: https://neuralink.com/updates/a-year-of-telepathy/

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    46 mins
  • AI Agents Shift, Not SAVE, Your Time (Don't Be Fooled by Marketing Hype)
    Dec 10 2025

    What happens when you automate away a six-hour task? You don't get more free time ... you just do more work.

    In this impromptu conversation, Kimberly and Jessica break down what agentic AI actually does, why the "time savings" narrative misses the point entirely, and how to figure out which workflows are worth automating.

    WHAT WE COVER:

    • What agentic AI actually is (and how it's different from ChatGPT)
    • Jessica's real invoice automation workflow: how she turned 6 hours of manual work into an AI agent task
    • The framework for identifying automatable workflows (repetitive, skill-free, multi-step tasks)
    • Why this beats creative AI work: no judgment calls, just execution
    • The Blackboard experiment: what happens when an agent does something you didn't ask it to do
    • Security & trust: passwords, login credentials, and where your data actually goes
    • Enterprise-level agent solutions (and why they're not quite ready yet)
    • The uncomfortable truth: freed-up time doesn't mean fewer hours—it means more output
    • How detailed instruction manuals prepared Jessica for prompt engineering
    • The human bottleneck: why your whole organization has to move at the same speed
    • Why marketing and research are next on the chopping block

    TOOLS MENTIONED:

    • ChatGPT Pro with Agents — https://openai.com/chatgpt/
    • Perplexity Comet (agentic browser) — https://www.perplexity.ai/comet
    • Zoho Billing — https://www.zoho.com/billing/
    • Constant Contact — https://www.constantcontact.com
    • Zapier — https://zapier.com
    • Elicit (systematic reviews & literature analysis) — https://elicit.com
    • Corpus of Contemporary American English — https://www.english-corpora.org/coca/
    • Descript — https://www.descript.com
    • Canva — https://www.canva.com
    • Riverside.fm — https://riverside.fm

    TIMESTAMPS:

    • 0:00 — Opening & guest cancellation
    • 1:18 — Podcast website & jingle development (and why music taste is complicated)
    • 6:34 — What is agentic AI? Jessica's invoice automation example
    • 10:33 — Why this use case actually works
    • 14:15 — The Blackboard incident (when the agent went off-script)
    • 16:21 — Security concerns: passwords, login credentials, and trust
    • 18:35 — Why speed doesn't matter (as long as it's faster than the human bottleneck)
    • 19:27 — Enterprise solutions on the horizon
    • 20:57 — United Airlines cease-and-desist letters for replica training sites
    • 22:27 — Why Kimberly can't use agents in her CCRC work
    • 25:21 — How to identify your automatable workflows (the practical framework)
    • 27:57 — Research automation with Elicit & corpus linguistics
    • 30:45 — The core insight: AI shifts time, it doesn't save it
    • 34:10 — Organizational bottlenecks & human capacity limits
    • 35:08 — Pit & Peach (staying in your own canoe)

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    38 mins
  • Once You See It, You Can't Unsee It: The Enshittification of Tech Platforms
    Nov 26 2025

    In this conversation, Kimberly Becker and Jessica Parker explore the concept of 'enshittification'—as articulated by Cory Doctorow in his book Enshittification: Why Everything Suddenly Got Worse and What To Do About It—as it relates to generative AI and tech platforms. They discuss the stages of platform development, the shift from individual users to business customers, and the implications of algorithmic changes for user experience.

    The conversation also explores the work of AI researchers Emily M. Bender and Timnit Gebru, whose paper "On the Dangers of Stochastic Parrots" raised critical questions about the limitations and risks of large language models. The hosts explore the role of data privacy, the impact of AI on labor, the need for regulation, and the dangers of market consolidation, using case studies like Amazon's acquisition and eventual shutdown of Diapers.com and Google's Project Maven controversy.

    Key Takeaways

    • Enshittification refers to the degradation of tech platforms over time
    • The shift from individual users to business customers can lead to worse outcomes for end users
    • Data privacy is a critical concern as companies monetize user interactions
    • AI is predicted to significantly displace workers in coming years
    • Regulation is necessary to protect consumers from unchecked corporate power
    • Market consolidation can stifle competition and innovation
    • Recognizing these patterns is essential for navigating the tech landscape

    Further Reading & Resources

    • Cory Doctorow's Pluralistic blog
    • The Internet Con: How to Seize the Means of Computation
    • 2024 Tech Layoffs Tracker

    Top Links

    • Cory Doctorow on Enshittification
    • Enshittification book
    • "On the Dangers of Stochastic Parrots" by Bender & Gebru
    • Amazon/Diapers.com case study
    • Google Project Maven controversy
    • AI job displacement tracker

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    58 mins
  • Maternal AI and the Myth of Women Saving Tech
    Nov 19 2025

    In this conversation, we sit down with Dr. Michelle Morkert, a global gender scholar, leadership expert, and founder of the Women’s Leadership Collective, to unpack the forces shaping women’s relationship with AI.

    We begin with research indicating that women are 20–25% less likely to use AI than men, but quickly move beyond the statistics to explore the deeper social, historical, and structural reasons why.

    Dr. Morkert brings her feminist and intersectional perspective to these questions, offering frameworks that help us see beyond the surface-level narratives of gender and AI use. This conversation is less about “women using AI” and more about power, history, social norms, and the systems we’re all navigating.

    If you’ve ever wondered why AI feels different for women—or what a more ethical, community-driven approach to AI might look like—this episode is for you.

    💬 Guest: Dr. Michelle Morkert – https://www.michellemorkert.com

    📚 Books & Scholarly Works Mentioned

    • Global Evidence on Gender Gaps and Generative AI: https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf
    • Pink Pilled: Women and the Far Right (Lois Shearing): https://www.barnesandnoble.com/w/pink-pilled-lois-shearing/1144991652l
    • Scary Smart (Mo Gawdat – maternal AI concept): https://www.mogawdat.com/scary-smart


    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    1 hr and 1 min
  • The Containment Problem: Why AI and Synthetic Biology Can't Be Contained
    Nov 5 2025

    In this episode, Jessica teaches Kimberly about the "containment problem," a concept that explores whether we can actually control advanced technologies like AI and synthetic biology.

    Inspired by Mustafa Suleyman's book The Coming Wave, Jessica and Kimberly discuss why containment might be impossible, the democratization of powerful technologies, and the surprising world of DIY genetic engineering (yes, you can buy a frog modification kit for your garage).

    What We Cover:

    • What is the containment problem and why it matters
    • The difference between AGI, ASI, and ACI
    • Why AI is fundamentally different from nuclear weapons when it comes to containment
    • Synthetic biology: from AlphaFold to $1,099 frog gene editing kits
    • The geopolitical arms race and why profit motives complicate containment
    • How technology democratization gives individuals unprecedented power
    • Whether complete AI containment is even possible (spoiler: probably not)
    • The modern Turing test and why perception might be reality

    Books & Resources Mentioned:

    • Empire of AI by Karen Hao
    • DeepMind documentary

    Key Themes:

    • Technology inevitability vs. choice
    • The challenges of regulating rapidly evolving technologies
    • Who benefits from AI advancement?
    • The tension between innovation and safety


    Follow Women Talking About AI for more conversations exploring the implications, opportunities, and challenges of artificial intelligence.

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    53 mins
  • Refusing the Drumbeat
    Oct 18 2025

    On saying no to “inevitable” AI—and what we say yes to instead.

    Kimberly and Jessica recently sat down with Melanie Dusseau and Miriam Reynoldson for an episode of Women Talkin’ ’Bout AI. We were especially looking forward to this conversation because Melanie and Miriam are our first guests who openly identify as “AI Resisters.” The timing also felt right: we’ve both been reexamining our stance on AI in education—how it intersects with learning, writing, and creativity—and the more distance we’ve had from running a tech company, the more critical and curious we’ve become.

    This episode digs into big, thorny questions:

    • What Melanie calls “the drumbeat of inevitability” that pressures educators to adopt AI
    • Miriam’s post-digital view of what it means to live in a world completely entangled with technology
    • Our shared inquiry into who actually benefits when AI tools promise to make everything faster and more efficient

    We also talk about data ethics, creative integrity, and the growing movement of educators saying no to automation—not out of fear, but out of care for human learning and connection.

    It’s a thoughtful, challenging, and hopeful conversation—and we hope you enjoy it as much as we did.

    About our guests: Melanie is an Associate Professor of English at the University of Findlay and a writer whose work spans poetry, plays, and fiction. Miriam is a Melbourne-based digital learning designer, educator, and PhD candidate at RMIT University whose research explores the value of learning in times of digital ubiquity.

    Melanie and Miriam are co-authors of the Open Letter from Educators Who Refuse the Call to Adopt GenAI in Education, which has collected over 1,000 signatures and was featured in an article by Forbes. Melanie is also the author of the essay Burn It Down, which advocates for AI resistance in the academy. We highly recommend reading both before diving into the episode.

    1. Melanie's personal website and University of Findlay profile
    2. Miriam’s personal website and blog "Care Doesn't Scale"
    3. Signs Preceding the End of the World by Yuri Herrera
    4. Asimov’s Science Fiction
    5. Ursula K. Le Guin
    6. Ray Bradbury

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    1 hr and 13 mins
  • Hallucinations, Hype, and Hope: Rebecca Fordon on AI in Legal Research
    Oct 11 2025

    In this episode of Women Talkin’ ’Bout AI, we sit down with Rebecca Fordon — law librarian, professor, and board member of the Free Law Project — to talk about how generative AI is transforming legal research, education, and the meaning of “expertise.”

    Rebecca helps us cut through the hype and ask harder questions: What problem are we really trying to solve with AI? Why are we using certain tools, and do we even know what data they’re built on?

    We talk about:

    🔹 How AI is reshaping the practice of legal research and what it means for the next generation of lawyers.
    🔹 Why hallucinated case law and “certainty amplification” reveal deeper problems of trust and transparency.
    🔹 The tension between speed and substance, and how “saving time” can actually shift where thinking happens.
    🔹 The expert pipeline problem: what happens when AI replaces the messy, formative parts of learning?
    🔹 How law librarians (and educators everywhere) are taking on the role of translators, bridging human judgment and machine outputs.
    🔹 The open-access movement in law and how the Free Law Project is democratizing legal data.

    At its heart, this episode is about reclaiming curiosity, caution, and critical thinking in a field that depends on precision, and remembering that faster isn’t always smarter.


    Learn more:
    🔗 Free Law Project: https://free.law

    🔗 AI Law Librarians: https://ailawlibrarians.com

    🔗 Aaron Tay's musings about librarianship: https://musingsaboutlibrarianship.blogspot.com/

    🔗 Refusing GenAI in Writing Studies: A Quickstart Guide: https://refusinggenai.wordpress.com/


    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    50 mins