• The Everything Machine and the Trillion-Dollar Bet
    Apr 29 2026

    What if the story we're being told about AI's inevitability is hiding something underneath? In this episode, Jessica and Kimberly sit down with George Kamide, anthropologist, community builder, and co-host of Bare Knuckles and Brass Tacks, to look past the headlines about the AI bubble and ask who actually has skin in the game.

    This is an episode about following the money, but it is also about following the questions. What is the outcome we actually want from this technology? And what happens to all of us when the people building it cannot answer that?

    Topics Covered

    • Why the dot-com bubble is the wrong analogy for AI infrastructure
    • How special purpose vehicles and obfuscatory financing hide AI debt
    • The Magnificent Seven and concentration risk in the S&P 500
    • Taiwan, TSMC, and the helium supply chain most people have never heard of
    • The "everything machine" promise and why it cannot pay for itself
    • Why an AI crash could starve the narrowly focused applications that actually work
    • The labor reorganization problem and why generalists may win
    • What chatbot tutors get wrong about teaching
    • Mythos, the open source ecosystem, and concentration of access to powerful tools
    • Why we keep analogizing ourselves to whatever technology we just built

    Referenced in This Episode

    • George Kamide and Bare Knuckles and Brass Tacks
    • Ed Zitron's reporting on AI infrastructure at Where's Your Ed At, including The Hater's Guide to the AI Bubble and AI Bubble 2027
    • Paul Kedrosky's essay Honey, AI Capex is Eating the Economy, which compares the AI buildout to past infrastructure booms
    • David Shapiro's earlier appearance on the show, Beyond Work: Post-Labor Economics
    • DeepLeaf, the Moroccan agritech company using AI to help small farmers detect crop disease
    • The MIT Antibiotics-AI Project that used deep learning to discover a new structural class of antibiotics against MRSA
    • Khan Academy's Khanmigo and the recent reckoning with the limits of LLM-based tutoring
    • Raffi Krikorian, CTO of Mozilla, and his New York Times op-ed It's the End of the Internet as We Know It on Mythos and open source access
    • Michael Pollan's new book A World Appears: A Journey into Consciousness

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr
  • AI-Generated Deepfake Porn and the Fight for Accountability: It's About Power, Not Sex
    Apr 22 2026

    Episode Summary

    In this episode, Kimberly and Jessica dig into the rising crisis of AI-generated deepfake non-consensual intimate imagery (NCII), and why it's not really a technology story. It's a power story. From a class action lawsuit against Elon Musk's xAI/Grok to a history of technology being used to harm women dating back to the printing press, this conversation situates deepfake porn within a long pattern of systems failing to protect women and girls at scale.

    They discuss a New York Times op-ed about a lawsuit involving three Tennessee teenagers whose yearbook photos were used to generate sexually explicit images and what the outcome of that case could mean for tech accountability. They also cover what parents can do, why law enforcement is struggling to keep up, and where to turn if you or someone you know has been victimized.

    In this episode:

    • What deepfakes are, and why "it's not real" doesn't reduce the harm
    • The xAI/Grok class action lawsuit and the co-creator legal argument
    • A quick history lesson: from the printing press to Facebook's origins as "FaceMash"
    • Why the barrier to entry is the real game-changer
    • What Elon Musk says about it — and why critics aren't buying it
    • Open-source models with no guardrails
    • The Take It Down Act and state-level deepfake legislation
    • Resources for victims and what watermarking can and can't do
    • Why talking to your kids matters (and why they probably know more than you)

    Resources and Links

    Primary episode sources:

    • New York Times op-ed: Deepfake Nudes Are Harming Teens
    • AP News: xAI/Grok lawsuit coverage
    • Lieff Cabraser on the NYT op-ed and the lawsuit

    Victim resources:

    • StopNCII.org
    • Sensity AI

    Legislation and policy:

    • The Take It Down Act (Latham & Watkins summary)
    • State deepfake legislation tracker — Public Citizen

    Context and background:

    • Understood: Deepfake Porn Empire (Apple Podcasts)
    • Understood: Deepfake Porn Empire (Spotify)
    • University College Cork: Deepfake Real Harms — Six Myths
    • AlgorithmWatch: Spain schoolboys and AI-generated fake nudes
    • Laura Bates, The New Age of Sexism
    • Brotopia by Emily Chang
    • Gilded Rage by Jacob Silverman

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    47 mins
  • AI Took the Doubt Out of the Writing. That's the Problem.
    Apr 15 2026

    Kimberly Becker joins George and George on the Bare Knuckles and Brass Tacks podcast to talk about what our research is revealing about the language AI produces and what it means for the rest of us.

    Topics Covered

    • How Kimberly's research compared AI-generated abstracts to human-written ones in nursing journals and what the key linguistic differences were
    • Why AI text tends to be informationally dense, formulaic, and stripped of hedging language
    • The Porter and Jick letter and how a five-sentence note helped fuel the opioid epidemic through citation chaining
    • What happens when AI scales the same kind of telephone game with scientific evidence
    • How algorithmic silos and certainty amplification may be eroding our tolerance for nuance
    • The difference between accuracy and complexity in writing, and why polished text is not the same as deep thinking
    • Why smaller, well-vetted language models may produce better outcomes than massive ones trained on internet slop
    • Neil Postman's idea that writing "freezes speech" and what that means in an era when fewer people are doing their own writing

    Referenced in This Episode

    • Bare Knuckles and Brass Tacks podcast
    • The Porter and Jick letter (1980) on opioid addiction
    • Neil Postman, Amusing Ourselves to Death
    • James Marriott's essay on the post-literate society
    • Derek Thompson, "The Decline of Thinking" (The Atlantic)
    • OpenAI's Prism research tool

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    38 mins
  • Depth is the Human Edge
    Apr 8 2026

    Jessica and Kimberly just had a paper accepted for publication in Frontiers in Education. So today, they're sharing what they've learned.

    The big idea is that AI is not a neutral tool. It's a cultural intermediary. Just like a human translator doesn't swap words one for one, AI mediates the way we understand the world. It shapes what we write, what we trust, and what we treat as true. And most of us have no idea that's happening.

    They walk through the research behind their framework, talk about what AI actually does well (fluency and accuracy), and where it falls short (depth, nuance, relational intelligence). And they share real examples from their work that show what it looks like when we hand over too much of our thinking to a machine.

    Topics Covered

    • What it means to treat AI as a cultural intermediary and why that framing changes everything
    • The difference between accuracy, fluency, and depth in writing, and why AI can only get you so far
    • How a consulting firm charged thousands of dollars for a report that ChatGPT could replicate in minutes
    • What a capability map for AI literacy looks like, from emerging to proficient
    • Why relational intelligence is the human edge that AI cannot replicate
    • How AI is widening the distance between people and what we lose when we stop talking to each other
    • The social media influencer as a double intermediary, and what that means for kids whose brains aren't fully developed yet
    • Why publishing in an AI-focused field is its own kind of pit

    Referenced in This Episode

    • The "Attention Is All You Need" paper and the transformer architecture
    • Timnit Gebru and the Stochastic Parrots paper
    • Taylor & Francis and the $75 million content licensing deal with AI companies

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    58 mins
  • Leading When You Can't Know What's Real: AI, Uncertainty, and the Limits of Expertise
    Apr 1 2026

    Jessica and Kimberly sit down with Rebecca Bultsma, an AI ethics researcher completing her dissertation in Data and AI Ethics at the University of Edinburgh, keynote speaker, and Chief Innovation Officer with a background in communication strategy and leadership consulting.

    They invited Rebecca to dig into one of the most unsettling questions of this moment: how do we make decisions when we can never be certain what is real? From deepfake videos circulating in school districts to voice cloning in courtrooms, Rebecca's research follows leaders into the places where the old rules no longer apply and asks what they are actually drawing on when the evidence itself cannot be trusted. She shares the concept of aporia, that frustrated, in-between state of not knowing, and makes the case that sitting with uncertainty is not a weakness. It is where real learning begins.

    Topics Covered

    • What aporia is and why it might be the most honest description of how we all feel about AI right now
    • How K-12 leaders are making high-stakes decisions when video evidence can no longer be verified
    • Why AI detection tools are failing students, teachers, and the humans tasked with enforcing academic integrity
    • The gap between how fast deepfake technology is developing and how fast detection can keep up
    • What watermarking can and cannot do, and how easy it is to work around
    • Why Rebecca thinks we are heading back toward a more oral society
    • Prompt baiting, AI burnout, and the research emerging around cognitive overload
    • Using AI as an accountability partner rather than a ghostwriter
    • What kids are seeing on social media that adults are missing

    Referenced in This Episode

    • rebeccabultsma.com
    • Forbes: "AI Ethicist Explains How to Humanize AI in the Care Economy" (March 2026)
    • The Brookings Institution report on AI and student expectations
    • Dr. Rachel Wood on AI and human relationships

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr and 10 mins
  • Data Annotation: The Human Labor Behind AI with Heather Mellquist Lehto, PhD
    Mar 24 2026

    Jessica and Kimberly sit down with Heather Mellquist Lehto, PhD.

    Heather is a mathematician, anthropologist, former Harvard faculty member, Vatican AI advisor, and founder of Guilded AI. They asked her to pull back the curtain on data annotation: the human labor that makes AI possible and one of the least visible, least understood, and most exploited parts of the entire industry. From pennies-per-task gig work to expert PhDs clicking through unpaid tests, they dig into who is actually building these models, what they are being paid, and why the workers creating billions in value are locked out of the wealth they generate. Heather shares why she got fed up with the recruiting playbook, what she is building differently at Guilded AI, and why treating workers well is not just an ethical argument but a data quality one.

    Topics Covered:

    • What data annotation is and why it still requires human expertise at every level of AI development
    • The difference between data annotation and reinforcement learning from human feedback
    • How workers go from labeling apples to annotating molecular structures and advanced mathematics
    • Why the effective hourly rate for data annotators is much lower than advertised
    • Scale AI, the $29 billion valuation, and the Department of Labor investigation
    • How Guilded AI is structuring equity so annotators share in the upside
    • Garbage in, garbage out: why worker treatment is a data quality issue
    • AI chatbot vibe checks as expert vetting, and why that fails everyone
    • The Gilded Age, guilds, and what banding together could look like
    • Why the perfect cannot be the enemy of the good

    Referenced in This Episode:

    • Empire of AI by Karen Hao
    • The Worlds I See by Fei-Fei Li
    • The Age of Surveillance Capitalism by Shoshana Zuboff
    • Rerum Novarum by Pope Leo XIII
    • Guilded AI
    • Scale AI and the Meta investment

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr and 19 mins
  • The Soft Skills Aren't Soft: Relational Intelligence, Workplace Culture, and What AI Can't Replace
    Mar 18 2026

    What does it mean to do meaningful work? And what happens to that meaning when AI enters the picture?

    This week we're joined by Valerie Morris, co-host of the podcast Inside Work and Relational Intelligence chapter lead at Culture First. Valerie works with employees and organizations navigating the human side of AI adoption, and she brings both an organizational psychology perspective and a practitioner's honesty to a conversation that gets personal quickly.

    We talk about why so many employees feel they can't voice real concerns about how AI is being rolled out, why the skills that create meaning at work (connection, relational intelligence, the ability to just be present with another person) are exactly the ones being sidelined in the rush to automate, and what it looks like to push back on that, quietly and practically, even when you can't change the culture around you.

    Woven through all of it is a question the three of us keep circling: What are we willing to give up in the name of efficiency?

    None of it is anti-AI exactly. It's more like a case for paying attention to what you're trading away.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr and 1 min
  • Is Anyone Steering This Thing? Clara Hawking on AI Governance
    Mar 11 2026

    AI governance sounds like something for IT departments and government committees. It's not. According to computer scientist, philosopher, and AI governance expert Clara Hawking, it's really about behavior — how we use technology, who gets harmed when we use it carelessly, and whether the systems we're building deserve our trust.

    In this episode, Clara breaks down what AI governance actually looks like in practice, from a professor who unknowingly violated GDPR by grading student work through his personal ChatGPT account to the risks that compound (not just add up) when AI, biotech, robotics, and quantum computing start feeding into each other. We also get personal about what it means to govern ourselves first, before we can ask anything of institutions.

    If you've ever seen the words "AI governance" and assumed it had nothing to do with you — this one's for you.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr