Episodes

  • Spec2TestAI: Stop Defects Before They Reach Production with Missy Trumpler
    Jan 27 2026

    Most teams find defects after the damage is done — during regression, late-stage testing, or production incidents. That's expensive, stressful, and completely avoidable.

    Try Spec2TestAI now: https://testguild.me/spec2testdemo

    In this episode, Joe Colantonio sits down with Missy Trumpler, CEO of AgileAILabs, to explore how Spec2TestAI helps teams prevent defects before code ships by applying AI directly to requirements.

    You'll learn:

    • Why traditional test automation still misses critical risk
    • How predictive, requirements-based AI testing works in practice
    • What "shift-left" actually looks like beyond the buzzword
    • How to reduce escaped defects without writing more tests
    • Why secure, explainable AI matters for QA and enterprise teams

    This conversation is especially valuable for software testers, automation engineers, and QA leaders who want earlier visibility into risk, faster feedback, and higher confidence releases.

    Don't miss Automation Guild 2026 - Register Now: https://testguild.me/podag26

    35 mins
  • Locust Performance Testing with AI and Observability with Lars Holmberg
    Jan 13 2026

    Performance testing often fails for one simple reason: teams can't see where the slowdown actually happens.

    In this episode, we explore Locust load testing and why Python-based performance testing is becoming the go-to choice for modern DevOps, QA, and SRE teams. You'll learn how Locust enables highly realistic user behavior, massive concurrency, and distributed load testing — without the overhead of traditional enterprise tools.

    We also dive into:

    • Why Python works so well for AI-assisted load testing

    • How Locust fits naturally into CI/CD and GitHub Actions
    • The real difference between load testing vs performance testing
    • How observability and end-to-end tracing eliminate guesswork
    • Common performance testing mistakes even experienced teams make

    Whether you're a software tester, automation engineer, or QA leader looking to shift-left performance testing, this conversation will help you design smarter tests and catch scalability issues before your users do.
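    The "highly realistic user behavior" and "massive concurrency" described above come from the virtual-user model that Locust-style tools implement: each simulated user independently picks weighted tasks and pauses between actions. The sketch below is a pure-Python toy illustration of that model, not Locust's actual API; the names `VirtualUser` and `run_load_test` are invented for this example.

```python
import threading
import random
from collections import Counter

# Toy illustration of the virtual-user model behind Locust-style load
# tools: each simulated user is an independent unit of concurrency that
# repeatedly picks a weighted task. (Hypothetical names; Locust itself
# uses HttpUser classes and greenlets, not raw threads.)

class VirtualUser:
    # Weighted tasks: browsing is 3x as likely as checkout, which is
    # how these tools approximate realistic user behavior.
    tasks = [("browse", 3), ("checkout", 1)]

    def __init__(self, stats):
        self.stats = stats  # per-user stats, merged after the run

    def run(self, iterations):
        weighted = [name for name, w in self.tasks for _ in range(w)]
        for _ in range(iterations):
            task = random.choice(weighted)
            self.stats[task] += 1  # stand-in for issuing a request

def run_load_test(num_users=50, iterations=20):
    # One counter per user avoids racy shared writes across threads.
    per_user = [Counter() for _ in range(num_users)]
    threads = [
        threading.Thread(target=VirtualUser(c).run, args=(iterations,))
        for c in per_user
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = Counter()
    for c in per_user:
        total.update(c)
    return total

stats = run_load_test()
print(sum(stats.values()))  # 50 users * 20 iterations = 1000 simulated requests
```

    In real Locust, the same shape is expressed as an `HttpUser` subclass with `@task`-decorated methods and a `wait_time`, and concurrency scales far beyond what raw threads allow.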

    30 mins
  • Top 8 Automation Testing Trends for 2026 with Joe Colantonio
    Jan 6 2026
    AI testing is everywhere — but clarity isn't. In this episode, Joe Colantonio breaks down the real test automation trends for 2026, based on data from 40,000+ testers, 510 live Q&A questions, and 50+ interviews with industry leaders. This isn't vendor hype or futuristic speculation. It's what working testers are actually worried about — and what they're doing next.

    You'll learn:

    • Why 72.8% of testers prioritize AI, yet don't trust it alone
    • The real reason AI testing feels harder instead of easier
    • How integration chaos is blocking automation success
    • Why "AI auditor" and "quality strategist" are emerging career paths
    • What agentic AI, MCPs, and vibe testing really mean in practice
    • How compliance, accessibility, and security will redefine QA in 2026

    If you're a tester, automation engineer, or QA leader trying to stay relevant — this episode gives you the signal through the noise, and a clear path forward.

    Discount code: 100GUILDCOIN (https://testguild.me/podag26)
    12 mins
  • Automation Testing Podcast 2026: New Schedule, Events, Discounts with Joe Colantonio
    Dec 28 2025

    This is a special end-of-year episode of the Automation Testing Podcast.

    With family in town and a busy holiday season, Joe didn't want to skip a week without checking in and saying thank you to the TestGuild community.

    In this short episode, Joe shares:

    • A huge milestone as the podcast approaches its 13-year anniversary
    • Why the Automation Testing Podcast is moving from Sundays to Tuesdays starting in 2026
    • How loyal listeners can still get $100 off a full 5-day Automation Guild 2026 pass
    • A sneak peek at TestGuild IRL — live, in-person events coming next year
    • Gratitude for the listeners, YouTube community, and sponsors who make TestGuild possible

    If you're a software tester, automation engineer, or QA leader looking ahead to 2026, this episode lays out what's coming — and how to stay connected.

    Discount code: 100GUILDCOIN (https://testguild.me/podag26)
    Questions or ideas? Email Joe directly at joe@testguild.com

    As always — test everything, and keep the good.

    2 mins
  • AI Testing LLMs & RAG: What Testers Must Validate with Imran Ali
    Dec 21 2025

    AI is transforming how software is built, but testing AI systems requires an entirely new mindset.

    Don't miss Automation Guild 2026 - Register Now: https://testguild.me/podag26

    Use code TestGuildPod20 to get 20% off your ticket.

    In this episode, Joe Colantonio sits down with Imran Ali to break down what AI testing really looks like when you're dealing with LLMs, RAG pipelines, and autonomous QA workflows.

    You'll learn:

    • Why traditional pass/fail testing breaks down with LLMs
    • How to test non-deterministic AI outputs for consistency and accuracy
    • Practical techniques for detecting hallucinations, grounding issues, and prompt injection risks
    • How RAG systems change the way testers validate AI-powered applications
    • Where AI delivers quick wins today — and where human validation still matters

    This conversation goes beyond hype and gets into real-world AI testing strategies QA teams are using right now to keep up with AI-generated code, faster release cycles, and DevOps velocity.

    If you're a tester, automation engineer, or QA leader wondering how AI changes your role — not replaces it — this episode is your roadmap.
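    One of the techniques the episode touches on — testing non-deterministic outputs for consistency rather than exact equality — can be sketched in plain Python. This is an illustrative toy only: `fake_llm` is a stub standing in for a real model, and the normalization step and 0.7 threshold are arbitrary choices for the example, not a recommendation from the episode.

```python
import random
from collections import Counter

def fake_llm(prompt, rng):
    # Stub simulating a model that is usually right but varies its
    # wording, and occasionally answers incorrectly.
    answers = ["Paris"] * 7 + ["paris.", " Paris ", "Lyon"]
    return rng.choice(answers)

def normalize(answer):
    # Canonicalize before comparing: exact string equality is too
    # brittle for LLM output.
    return answer.strip().rstrip(".").lower()

def self_consistency(prompt, n=40, rng=None):
    # Sample the model n times and measure agreement on the
    # normalized majority answer.
    rng = rng or random.Random(0)  # fixed seed for a repeatable demo
    votes = Counter(normalize(fake_llm(prompt, rng)) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n

answer, score = self_consistency("What is the capital of France?")
# Instead of asserting one exact string, assert that the system is
# *consistently* right above a chosen threshold.
print(answer, round(score, 2))
```

    The same shape — sample repeatedly, normalize, then assert on an agreement score — generalizes to checking RAG answers against grounded source text instead of a majority vote.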

    33 mins
  • AI Codebase Discovery for Testers with Ben Fellows
    Dec 14 2025
    What if understanding your codebase was no longer a blocker for great testing? Most testers were trained to work around the code — clicking through UIs, guessing selectors, and relying on outdated docs or developer explanations.

    In this episode, Playwright expert Ben Fellows flips that model on its head. Using AI tools like Cursor, testers can now explore the codebase directly — asking questions, uncovering APIs, understanding data relationships, and spotting risk before a single test is written.

    This isn't about becoming a developer. It's about using AI to finally see how the system really works — and using that insight to test smarter, earlier, and with far more confidence.

    If you've ever joined a new team, inherited a legacy app, or struggled to understand what really changed in a release, this episode is for you.

    Register for Automation Guild 2026 now: https://testguild.me/podag26
    44 mins
  • Gatling Studio: Start Performance Testing in Minutes (No Expertise Required) with Shaun Brown and Stephane Landelle
    Dec 7 2025
    Performance testing has traditionally been one of the hardest parts of QA — slow onboarding, complex scripting, difficult debugging, and too many late-stage surprises.

    Try Gatling Studio for yourself now: https://links.testguild.com/gatling

    In this episode, Joe sits down with Stéphane Landelle, creator of Gatling, and Shaun Brown to explore how Gatling is reinventing the load-testing experience. You'll hear how Gatling evolved from a developer-first framework into a far more accessible platform that supports Java, Kotlin, JavaScript/TypeScript, and AI-assisted creation. We break down the thinking behind Gatling Studio, a new companion tool designed to make recording, filtering, correlating, and debugging performance tests dramatically easier.

    Whether you're a developer, SDET, or automation engineer, you'll learn:

    • How to onboard quickly into performance testing — even without deep expertise
    • Why Gatling Studio offers a smoother way to record traffic and craft tests
    • Where AI is already improving load test authoring
    • How teams can shift-left performance insights and catch issues earlier
    • What's coming next as Gatling expands its developer experience and enterprise platform

    If you've been meaning to start performance testing — or scale it beyond one performance engineer — this episode will give you the clarity and confidence to begin.
    41 mins
  • AI-Driven Manual Regression: Test Only What Truly Matters With Wilhelm Haaker and Daniel Garay
    Dec 1 2025
    Manual regression testing isn't going away — yet most teams still struggle with deciding what actually needs to be retested in fast release cycles.

    See how AI can help your manual testing now: https://testguild.me/parasoftai

    In this episode, we explore how Parasoft's Test Impact Analysis helps QA teams run fewer tests while improving confidence, coverage, and release velocity. Wilhelm Haaker (Director of Solution Engineering) and Daniel Garay (Director of QA) join Joe to unpack how code-level insights and real coverage data eliminate guesswork during regression cycles. They walk through how Parasoft CTP identifies exactly which manual or automated tests are impacted by code changes — and how teams use this to reduce risk, shrink regression time, and avoid redundant testing.

    What You'll Learn:

    • Why manual regression remains a huge bottleneck in modern DevOps
    • How Test Impact Analysis reveals the exact tests affected by code changes
    • How code coverage + impact analysis reduce risk without expanding the test suite
    • Ways teams use saved time for deeper exploratory testing
    • How QA, Dev, and Automation teams can align with real data instead of assumptions

    Whether you're a tester, automation engineer, QA lead, or DevOps architect, this episode gives you a clear path to faster, safer releases using data-driven regression strategies.
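    The core idea behind test impact analysis can be sketched in a few lines: record which source files each test exercised on a prior coverage run, then select only the tests whose covered files changed. This is a toy illustration of the concept, not Parasoft CTP's implementation; the coverage map and file names below are invented.

```python
# Toy illustration of test impact analysis: given a coverage map (which
# source files each test touched on the last run) and a set of changed
# files, select only the impacted tests. The map and file names are
# hypothetical; a real tool derives this from code-level coverage data.

coverage_map = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
    "test_search": {"search.py"},
}

def impacted_tests(coverage_map, changed_files):
    # A test is impacted if it exercises at least one changed file.
    return sorted(
        test for test, files in coverage_map.items()
        if files & changed_files
    )

# A change to auth.py should re-run only the two tests that touch it.
selected = impacted_tests(coverage_map, {"auth.py"})
print(selected)  # ['test_login', 'test_profile']
```

    The time saved by skipping `test_checkout` and `test_search` here is exactly the "fewer tests, same confidence" trade the episode describes — provided the coverage map is kept fresh, which is the hard part real tools solve.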
    39 mins