Episodes

  • AI Codebase Discovery for Testers with Ben Fellows
    Dec 14 2025
    What if understanding your codebase was no longer a blocker for great testing? Most testers were trained to work around the code — clicking through UIs, guessing selectors, and relying on outdated docs or developer explanations.

    In this episode, Playwright expert Ben Fellows flips that model on its head. Using AI tools like Cursor, testers can now explore the codebase directly — asking questions, uncovering APIs, understanding data relationships, and spotting risk before a single test is written. This isn't about becoming a developer. It's about using AI to finally see how the system really works — and using that insight to test smarter, earlier, and with far more confidence.

    If you've ever joined a new team, inherited a legacy app, or struggled to understand what really changed in a release, this episode is for you.

    Register for Automation Guild 2026 now: https://testguild.me/podag26
    44 mins
  • Gatling Studio: Start Performance Testing in Minutes (No Expertise Required) with Shaun Brown and Stephane Landelle
    Dec 7 2025
    Performance testing has traditionally been one of the hardest parts of QA: slow onboarding, complex scripting, difficult debugging, and too many late-stage surprises.

    Try Gatling Studio for yourself now: https://links.testguild.com/gatling

    In this episode, Joe sits down with Stéphane Landelle, creator of Gatling, and Shaun Brown to explore how Gatling is reinventing the load-testing experience. You'll hear how Gatling evolved from a developer-first framework into a far more accessible platform that supports Java, Kotlin, JavaScript/TypeScript, and AI-assisted test creation. We break down the thinking behind Gatling Studio, a new companion tool designed to make recording, filtering, correlating, and debugging performance tests dramatically easier.

    Whether you're a developer, SDET, or automation engineer, you'll learn:
    – How to onboard quickly into performance testing, even without deep expertise
    – Why Gatling Studio offers a smoother way to record traffic and craft tests
    – Where AI is already improving load test authoring
    – How teams can shift-left performance insights and catch issues earlier
    – What's coming next as Gatling expands its developer experience and enterprise platform

    If you've been meaning to start performance testing, or to scale it beyond one performance engineer, this episode will give you the clarity and confidence to begin.
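    If you're curious how low the entry bar is in code, here is a minimal sketch of a first load test written in Gatling's Java DSL (the target URL and ramp numbers are illustrative placeholders, not from the episode):

        import static io.gatling.javaapi.core.CoreDsl.*;
        import static io.gatling.javaapi.http.HttpDsl.*;

        import io.gatling.javaapi.core.ScenarioBuilder;
        import io.gatling.javaapi.core.Simulation;
        import io.gatling.javaapi.http.HttpProtocolBuilder;

        public class BasicSimulation extends Simulation {

            // All requests in the scenario go against this base URL (placeholder).
            HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.com");

            // A scenario is the journey a single virtual user performs.
            ScenarioBuilder scn = scenario("Smoke test")
                    .exec(http("Home page").get("/"));

            {
                // Ramp up 10 virtual users over 30 seconds.
                setUp(scn.injectOpen(rampUsers(10).during(30)))
                        .protocols(httpProtocol);
            }
        }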
    41 mins
  • AI-Driven Manual Regression: Test Only What Truly Matters with Wilhelm Haaker and Daniel Garay
    Dec 1 2025
    Manual regression testing isn't going away — yet most teams still struggle with deciding what actually needs to be retested in fast release cycles.

    See how AI can help your manual testing now: https://testguild.me/parasoftai

    In this episode, we explore how Parasoft's Test Impact Analysis helps QA teams run fewer tests while improving confidence, coverage, and release velocity. Wilhelm Haaker (Director of Solution Engineering) and Daniel Garay (Director of QA) join Joe to unpack how code-level insights and real coverage data eliminate guesswork during regression cycles. They walk through how Parasoft CTP identifies exactly which manual or automated tests are impacted by code changes — and how teams use this to reduce risk, shrink regression time, and avoid redundant testing.

    What You'll Learn:
    – Why manual regression remains a huge bottleneck in modern DevOps
    – How Test Impact Analysis reveals the exact tests affected by code changes
    – How code coverage plus impact analysis reduce risk without expanding the test suite
    – Ways teams use the saved time for deeper exploratory testing
    – How QA, Dev, and Automation teams can align with real data instead of assumptions

    Whether you're a tester, automation engineer, QA lead, or DevOps architect, this episode gives you a clear path to faster, safer releases using data-driven regression strategies.
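    To make the core idea concrete, here is a hedged, generic sketch of the selection logic behind test impact analysis, in plain Java with made-up test and file names (Parasoft's actual implementation works from real coverage data and is not shown here):

        import java.util.Collections;
        import java.util.Map;
        import java.util.Set;
        import java.util.TreeSet;

        class TestImpactSketch {
            // Given per-test coverage and the set of changed files,
            // keep only the tests that touch changed code.
            static Set<String> impactedTests(Map<String, Set<String>> coverageByTest,
                                             Set<String> changedFiles) {
                Set<String> impacted = new TreeSet<>();
                coverageByTest.forEach((test, files) -> {
                    if (!Collections.disjoint(files, changedFiles)) {
                        impacted.add(test);
                    }
                });
                return impacted;
            }

            public static void main(String[] args) {
                Map<String, Set<String>> coverage = Map.of(
                        "LoginTest", Set.of("Auth.java", "Session.java"),
                        "CheckoutTest", Set.of("Cart.java", "Payment.java"));
                // Only CheckoutTest covers the changed file, so only it is selected.
                System.out.println(impactedTests(coverage, Set.of("Payment.java")));
            }
        }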
    39 mins
  • Top Automation Guild Survey Insights for 2026 with Joe Colantonio
    Nov 24 2025

    Automation Guild turns 10 this year, and the 2026 survey revealed some of the strongest trends and signals the testing community has ever shared.

    Register now: https://testgld.link/ag26reg

    In this episode, Joe breaks down the most important insights shaping Automation Guild 2026 and what they mean for testers, automation engineers, and QA leaders.

    You'll hear why AI-powered testing is dominating every category, why Playwright has officially become the tool testers want most, the challenges that continue to follow teams year after year, and how testers are navigating shrinking teams, faster releases, and rising expectations.

    This episode gives you a clear, data-driven snapshot of why Automation Guild 2026 matters — and how this year's event is designed to help you stay relevant, sharpen your skills, and tackle the problems that keep slowing down teams.

    Perfect for anyone considering joining the Guild, planning their 2026 automation strategy, or just trying to make sense of the rapid changes happening in testing today.

    9 mins
  • Testing AI Vibe Coding: Stop Vulnerabilities Early with Sarit Tager
    Nov 16 2025
    AI is accelerating software delivery, but it's also introducing new security risks that most developers and automation engineers never see coming.

    In this episode, we explore how AI-generated code can embed vulnerabilities by default, how "vibe coding" is reshaping developer workflows, and what teams must do to secure their pipelines before bad code reaches production. You'll learn how to prompt more securely, how guardrails can stop vulnerabilities at generation time, how to prioritize real risks instead of false positives, and how AI can be used to protect your applications just as effectively as attackers use it to exploit them.

    Whether you're using Cursor, Copilot, Playwright MCP, or any AI tool in your automation workflow, this conversation gives you a clear roadmap for staying ahead of AI-driven vulnerabilities — without slowing down delivery.

    Featuring Sarit Tager, VP of Product for Application Security at Palo Alto Networks, who reveals real-world insights on securing AI-generated code, understanding modern attack surfaces, and creating a future-proof DevSecOps strategy.
    32 mins
  • 4 Free TestGuild Tools Every Tester Should Be Using with Joe Colantonio
    Nov 9 2025

    In this solo episode, Joe Colantonio shares four powerful free TestGuild tools designed to help testers, automation engineers, and QA leaders work smarter. Discover how to instantly find the right testing tool for your team, assess automation risk, check your site's accessibility, and benchmark your automation maturity — all in one session.

    Whether you're looking to improve test coverage, adopt better practices, or simply save time, these tools were built with you in mind.

    What You'll Learn:
    – How to choose the right test automation tool fast
    – How to identify and reduce testing risk
    – How to check your site's accessibility compliance
    – How to assess your team's automation maturity level

    Try the tools free:

    Tool Matcher: https://testgld.link/toolmatcher
    Accessibility Scanner: https://testgld.link/scanner
    Risk Calc: https://testgld.link/riskcalc
    Automation Readiness Quiz: https://testgld.link/scorequiz

    Join us for the 10th Annual Automation Guild Conference: https://testgld.link/IrHaNIVX

    17 mins
  • AI Testing Made Trustworthy Using FizzBee with JP Kadarkarai
    Nov 2 2025
    As AI tools like Copilot, Claude, and Cursor start writing more of our code, the biggest challenge isn't generating software — it's trusting it.

    In this episode, JP (Jayaprabhakar) Kadarkarai, founder of FizzBee, joins Joe Colantonio to explore how autonomous, model-based testing can validate AI-generated software automatically and help teams ship with confidence. FizzBee uses a unique approach that connects design, code, and behavior into one continuous feedback loop — automatically testing for concurrency issues and validating that your implementation matches your intent.

    You'll discover:
    – Why AI-generated code can't be trusted without validation
    – How model-based testing works and why it's crucial for AI-driven development
    – The difference between example-based and property-based testing
    – How FizzBee detects concurrency bugs without intrusive tracing
    – Why autonomous testing is becoming mandatory for the AI era

    Whether you're a software tester, DevOps engineer, or automation architect, this conversation will change how you think about testing in the age of AI-generated code.
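    The contrast between example-based and property-based testing is easy to see in code. As a hedged, generic illustration in Java using the jqwik library (this is not FizzBee's API, which isn't shown here):

        import net.jqwik.api.Example;
        import net.jqwik.api.ForAll;
        import net.jqwik.api.Property;

        class ReverseTests {

            // Example-based: one hand-picked input, one expected output.
            @Example
            boolean reversingAKnownString() {
                return new StringBuilder("guild").reverse().toString().equals("dliug");
            }

            // Property-based: the runner generates many inputs and checks an
            // invariant that must hold for all of them.
            @Property
            boolean reversingTwiceYieldsTheOriginal(@ForAll String s) {
                return new StringBuilder(s).reverse().reverse().toString().equals(s);
            }
        }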
    32 mins
  • Test Automation Optimus Prime Halloween Special with Paul Grossman
    Oct 21 2025

    In this Halloween special, Joe Colantonio and Paul Grossman discuss the evolution of automation testing, focusing on the integration of AI tools, project management strategies, and the importance of custom logging. Paul shares insights from his recent job experience, detailing how he inherited a project and the challenges he faced. He also walks through his Optimus Prime framework, using it to explore various automation tools, the significance of dynamic waiting, and how to handle test case collisions. The discussion also highlights the role of AI in enhancing automation frameworks and the importance of version control in software development.
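    On the dynamic-waiting point: the technique means polling for a condition instead of sleeping a fixed time. Here is a minimal Selenium-style sketch in Java (the URL, locator, and timeout are illustrative assumptions, not taken from Paul's framework):

        import java.time.Duration;
        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.chrome.ChromeDriver;
        import org.openqa.selenium.support.ui.ExpectedConditions;
        import org.openqa.selenium.support.ui.WebDriverWait;

        public class DynamicWaitDemo {
            public static void main(String[] args) {
                WebDriver driver = new ChromeDriver();
                try {
                    driver.get("https://example.com"); // placeholder URL

                    // Dynamic wait: poll for up to 10 seconds until the element
                    // is clickable, instead of a fixed sleep that is either too
                    // short (flaky) or too long (slow).
                    WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
                    wait.until(ExpectedConditions.elementToBeClickable(By.id("save"))) // hypothetical id
                        .click();
                } finally {
                    driver.quit();
                }
            }
        }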

    42 mins