Episodes

  • Gen AI Threat Modeling vs. AI-Powered Defense:
    Jul 31 2025

    Is generative AI a security team's greatest new weapon or its biggest new vulnerability? This episode dives headfirst into the debate with two leading experts on opposite sides of the AI dragon. We 1st published this episode on Cloud Security Podcast and because of the feedback we received from those diving into all things AI Security, we wanted to bring it to those who haven't probably had the chance to hear it yet on this podcast.


    On one side, discover how to leverage and "tame" AI for your defense. Jackie Bow explains how Anthropic uses its own powerful LLM, Claude, to revolutionize threat detection and response. Learn how AI can be used to:

    • Build investigation and triage tools with incredible speed.

    • Break free from the "black box" of traditional security tools, offering more visibility and control.

    • Creatively "hallucinate" within set boundaries to uncover investigative paths a human might miss.

    • Lower the barrier to entry for security professionals, enabling them to build prototypes and tools without deep coding expertise.

    On the other side, Kane Narraway provides a masterclass in threat modeling the new landscape of AI systems. He argues that while AI introduces new challenges, many are amplifications of existing SaaS risks. This conversation covers the critical aspects of securing AI, including:

    • Why access, integrations, and authorization are the biggest risk factors in enterprise AI.

    • How to approach threat modeling for both in-house and third-party AI tools.

    • The security challenges of emerging standards like MCP (Meta-Controller Protocol) and the importance of securing the data AI tools can access.

    • The critical need for security teams to adopt AI to keep pace with modern engineering departments.


    Questions asked:

    (00:00) Intro: Slaying or Training the AI Dragon at BSidesSF?(02:22) Meet Jackie Bow (Anthropic): Training AI for Security Defense(02:51) Meet Kane Narraway (Canva): Securing AI Systems & Facing Risks(03:49) Was Traditional Security Ops "Hot Garbage"? Setting the Scene(05:57) The Real Risks: What AI Brings to Your Organisation(06:53) AI in Action: Leveraging AI for Threat Detection & Response(07:46) AI Hallucinations: Bug, Feature, or Security Blind Spot?(08:55) Threat Modeling AI: The Core Challenges & Learnings(12:26) Getting Started: Practical AI Threat Detection First Steps(16:42) AI & Cloud: Integrating AI into Your Existing Environments(25:21) AI vs. Traditional: Is Threat Modeling Different Now?(28:34) Your First Step: Where to Begin with AI Threat Modeling?(31:59) Fun Questions & Final Thoughts on the Future of AI Security


    Resources

    BSidesSF 2025 - AI's Bitter Lesson for SOCs: Let Machines Be Machines
    BSidesSF 2025 - One Search To Rule Them All: Threat Modelling AI Search

    Show More Show Less
    36 mins
  • Vibe Coding for CISOs: Managing Risk & Opportunity in AI Development
    Jun 27 2025

    What happens when your product, sales, and marketing teams can build and deploy their own applications in a matter of hours? This is the new reality of "Vibe Coding," and for CISOs, it represents both a massive opportunity for innovation and a significant governance challenge.

    In this episode, join Ashish Rajan and Caleb Sima as they move beyond the hype to provide a strategic playbook for security leaders navigating the world of AI-assisted development. Learn how Vibe Coding empowers non-engineers to solve business problems and how you can leverage it to rapidly prototype security solutions yourself. Get strategies to handle the inevitable influx of AI-generated applications from across the business without overwhelming your engineering and security teams.

    • Understanding the Core Opportunity
    • Assessing the Real-World Output
    • Managing the "Shadow Prototype" Risk
    • Building Proactive Guardrails
    • Architecting for Safety


    For more episodes like this go to www.aisecuritypodcast.com


    Questions asked:

    (00:00) Why Vibe Coding is a C-Suite Issue

    (02:34) The Strategic Advantage of Hands-On AI

    (04:20) Your AI Development Toolkit: Where to Start

    (12:08 Choosing Your First Project: A Framework for Success

    (16:46) The CISO as an AI Engineering Manager: A Step-by-Step Workflow

    (31:32) A Surprising Security Finding: AI and Least Privilege

    (36:47) Augmenting AI with Agents and Live Data

    (38:50) Beyond Code: AI Agents for Business Automation (Zapier, etc.)

    (43:30) The "Production Ready" Problem: Who Owns the Code?

    (53:25) A CISO's Playbook for Governing AI Development


    Resources spoken about during the episode:

    AI Native Landscape - Tools

    Cline

    Roo-Code

    Visual Studio Code

    Windsurf

    Bolt.new

    Aider

    v0 - Vercel

    Lovable

    Claude Code

    ChatGPT

    Show More Show Less
    1 hr
  • Vibe Coding, Slopsquatting, and the Future of AI in Software Development
    Jun 12 2025

    In this episode, we welcome back Guy Podjarny, founder of Snyk and Tessl, to explore the evolution of AI-assisted coding. We dive deep into the three chapters of AI's impact on software development, from coding assistants to the rise of "vibe coding" and agentic development.

    Guy explains what "vibe coding" truly is, a term coined by Andrej Karpathy where developers delegate more control to AI, sometimes without even reviewing the code. We discuss how this opens the door for non-coders to create real applications but also introduces significant risks.

    Caleb, Ashish and Guy discuss:

    • The Three Chapters of AI-Assisted Coding: The journey from simple code completion to full AI agent-driven development.
    • Vibe Coding Explained: What is it, who is using it, and why it's best for "disposable apps" like prototypes or weekend projects.
    • A New Security Threat - Slopsquatting: Discover how LLMs can invent fake library names that attackers can exploit, a risk potentially greater than typosquatting.
    • The Future of Development: Why the focus is shifting from the code itself—which may become disposable—to the importance of detailed requirements and rigorous testing.
    • The Developer as a Manager: How the role of an engineer is evolving into managing AI labor, defining specifications, and overseeing workflows


    Questions asked:

    (00:00) The Evolution of AI Coding Assistants(05:55) What is Vibe Coding?(08:45) The Dangers & Opportunities of Vibe Coding(11:50) From Vibe Coding to Enterprise-Ready AI Agents(16:25) Security Risk: What is "Slopsquatting"?(22:20) Are Old Security Problems Just Getting Bigger?(25:45) Cloud Sprawl vs. App Sprawl: The New Enterprise Challenge(33:50) The Future: Disposable Code, Permanent Requirements(40:20) Why AI Models Are Getting So Good at Understanding Your Codebase(44:50) The New Role of the AI-Native Developer: Spec & Workflow Manager(46:55) Final Thoughts & Favorite Coding Tools


    Resources spoken about during the episode:

    AI Native Dev Community

    Tessl

    Cursor

    Bolt

    BASE44

    Vercel

    Show More Show Less
    49 mins
  • AI in Cybersecurity: Phil Venables (Formerly Google Cloud CISO) on Agentic AI & CISO Strategy
    Jun 6 2025

    Dive deep into the evolving landscape of AI in Cybersecurity with Phil Venables, former Chief Information Security Officer at Google Cloud and a cybersecurity veteran with over 30 years of experience. Recorded at RSA, this episode explores the critical shifts and future trends shaping our industry.

    Caleb, Ashish and Phil speak about

    • The journey from predictive AI to the forefront of Agentic AI in enterprise environments.
    • How organizations are transitioning AI from experimental prototypes to impactful production applications.
    • The three essential pillars of AI control for CISOs: software lifecycle risk, data governance, and operational risk management.
    • Current adversarial uses of AI and the surprising realities versus the hype.
    • Leveraging AI to combat workforce skill shortages and boost productivity within security teams.
    • The rise of "Vibe Coding" and how AI is transforming software development and security.
    • The expanding role of the CISO towards becoming a Chief Digital Risk Officer.
    • Practical advice for security teams on adopting AI for security operations automation and beyond.


    Questions asked:

    (00:00) - Intro: AI's Future in Cybersecurity with Phil Venables

    (00:55) - Meet Phil Venables: Ex-Google Cloud CISO & Cyber Veteran

    (02:59) - AI Security Now: Navigating Predictive, Generative & Agentic AI

    (04:44) - AI: Beyond the Hype? Real Enterprise Adoption & Value

    (05:49) - Top CISO Concerns: Securing AI in Production Environments

    (07:02) - AI Security for All: Advice for Smaller Organizations (Hint: Platforms!)

    (09:04) - CISOs' AI Worries: Data Leakage, Prompt Injection & Deepfakes?

    (12:53) - AI Maturity: Beyond Terminator Fears to Practical Guardrails

    (14:45) - Agentic AI in Action: Real-World Enterprise Deployments & Use Cases

    (15:56) - Securing Agentic AI: Building Guardrails & Control Planes (Early Days)

    (22:57) - Future-Proof Your Security Program for AI: Key Considerations

    (25:13) - LLM Strategy: Single vs. Multiple Models for AI Applications

    (28:26) - "Vibe Coding": How AI is Revolutionizing Software Development for Leaders

    (32:21) - Security Implications of AI-Generated Code & "Shift Downward"

    (37:22) - Frontier Models & Shared Responsibility: Who Secures What?

    (39:07) - AI Adoption Hotbeds: Which Security Teams Are Leading the Way? (SecOps First!)

    (40:20) - AI App Sprawl: Managing Risk in a World of Custom, AI-Generated Apps

    Show More Show Less
    45 mins
  • Is Your Browser the Biggest AI Security Risk?
    May 29 2025

    Are you overlooking the most critical piece of real estate in your enterprise security strategy, especially with the rise of AI? With 90% or more of employee work happening inside a browser, it's becoming the new operating system and the primary entry point for AI agents.

    In this episode, Ashish and Caleb dive deep into the world of Enterprise Browsers. They explore why this often-underestimated technology is set to disrupt how AI agents operate and why it should be top-of-mind for every security leader.

    Join us as we cover:

    • What are Enterprise Browsers? Understanding these Chromium-based, standalone browsers.
    • Who are the Key Players? A look at companies like Island Security and Talon Security (now Palo Alto).
    • Why Now? How browsers became the de facto OS and the prime spot for AI integration.
    • The Power of Control: Exploring benefits like built-in DLP (Data Loss Prevention), Zero Trust capabilities, policy enforcement, and BYOD enablement.
    • Beyond Security: How enterprise browsers can inject features and modify permissions without backend dev work.
    • AI Agents in Action: How AI will leverage browsers for automation and the security challenges this presents.
    • The Future Outlook: Predictions for AI-enabled browsers and the coming wave of browser-focused AI security startups.

    Whether you're skeptical or already exploring browser security, this conversation offers valuable insights into managing AI agents and securing your organization in an increasingly browser-centric, AI-driven world.


    Questions asked:

    (00:00) Intro: Why Enterprise Browsers are Crucial for AI Agents(01:50) Why Discuss Enterprise Browsers on an AI Cybersecurity Podcast?(02:20) The Browser is the New OS: 99% of Time Spent (03:00) AI Agents' Easiest Entry Point: The Browser (03:30) Example: How an AI Agent Automates Tasks via Browser (04:30) The Scope: Intranet, SaaS, and 60% of Employee Activity (06:50) OpenAI's Operator Demo & Browser Emulation (07:45) Overview: What are Enterprise Browsers? (Vendors & Purpose) (08:50) Key Players: Talon (Palo Alto) & Island Security (09:30) Benefit 1: Built-in DLP & Visibility (10:10) Benefit 2: Zero Trust Capabilities (10:40) Benefit 3: Policy, Compliance & Password Management (11:00) Use Case: BYOD & Contractors (Replacing Virtual Desktops?) (13:10) Why Not Firefox or Edge? The Power of Chromium (16:00) Budgeting Challenge: Why Browser Security is Often Overlooked (17:00) The Rise of AI Browser Plugins & Startups (19:30) The Hidden Risk: Existing Chrome Plugin Dangers (23:45) Why Did OpenAI Want to Buy Chrome? (25:00) Devil's Advocate: Can Enterprise Browsers Stop OWASP Top 10? (27:06) Example: AI Agent Ordering Flowers via Browser Extension (29:00) How AI Agents Gain Power via Browser Extensions (30:15) Prediction: What AI Browser Security Startups will look like at RSA 2026? (31:30) Skepticism: Will Enterprises Really Fund Browser Security? (SSPM Lessons) (34:00) The #1 Benefit You Don't Know: Injecting Features Without Code! (34:45) Example: Masking PII & Adding 2FA via Enterprise Browser (38:15) Monitoring AI Agents: Browser as a "Man-in-the-Middle" (40:00) The "AI Version of Chrome": A Future Consumer Product? (42:15) Personal vs. Professional: The Blurring Lines in Browser Use (44:15) Final Predictions & The Cybersecurity Gap (45:00) Final Thoughts & Wrap Up

    Show More Show Less
    46 mins
  • AI Red Teaming & Securing Enterprise AI
    May 16 2025

    As AI systems become more integrated into enterprise operations, understanding how to test their security effectively is paramount.

    In this episode, we're joined by Leonard Tang, Co-founder and CEO of Haize Labs, to explore how AI red teaming is changing.

    Leonard discusses the fundamental shifts in red teaming methodologies brought about by AI, common vulnerabilities he's observing in enterprise AI applications, and the emerging risks associated with multimodal AI (like voice and image processing systems). We delve into the intricacies of achieving precise output control for crafting sophisticated AI exploits, the challenges enterprises face in ensuring AI safety and reliability, and practical mitigation strategies they can implement.

    Leonard shares his perspective on the future of AI red teaming, including the critical skills cybersecurity professionals will need to develop, the potential for fingerprinting AI models, and the ongoing discussion around protocols like MCP.


    Questions asked:

    • 00:00 Intro: AI Red Teaming's Evolution
    • 01:50 Leonard Tang: Haize Labs & AI Expertise
    • 05:06 AI vs. Traditional Red Teaming (Enterprise View)
    • 06:18 AI Quality Assurance: The Haize Labs Perspective
    • 08:50 AI Red Teaming: Real-World Application Examples
    • 10:43 Major AI Risk: Multimodal Vulnerabilities Explained
    • 11:50 AI Exploit Example: Voice Injections via Background Noise
    • 15:41 AI Vulnerabilities & Early XSS: A Cybersecurity Analogy
    • 20:10 Expert AI Hacking: Precisely Controlling AI Output for Exploits
    • 21:45 The AI Fingerprinting Challenge: Identifying Chained Models
    • 25:48 Fingerprinting LLMs: The Reality & Detection Difficulty
    • 29:50 Top Enterprise AI Security Concerns: Reputation & Policy
    • 34:08 Enterprise AI: Model Choices (Frontier Labs vs. Open Source)
    • 34:55 Future of LLMs: Specialized Models & "Hot Swap" AI
    • 37:43 MCP for AI: Enterprise Ready or Still Too Early?
    • 44:50 AI Security: Mitigation with Precise Input/Output Classifiers
    • 49:50 Future Skills for AI Red Teamers: Discrete Optimization


    Resources discussed during the episode:

    Baselines for Watermarking Large Language Models

    Haize Labs

    Show More Show Less
    53 mins
  • RSA Conference 2025 Recap: Agentic AI Hype, MCP Risks & Cybersecurity's Future
    May 9 2025

    Caleb and Ashish cut through the Agentic AI hype, expose real MCP (Multi-Cloud Platform) risks, and discuss the future of AI in cybersecurity. If you're trying to understand what really happened at RSA and what it means for the industry, you would want to hear this.

    In this episode, Caleb Sima and Ashish Rajan dissect the biggest themes from RSA, including:

    • Agentic AI Unpacked: What is Agentic AI really, beyond the marketing buzz?
    • MCP & A2A Deployment Dangers: MCPs are exploding, but how do you deploy them safely across an enterprise without slowing down business?
    • AI & Identity/Access Management: The complexities AI introduces to identity, authenticity, and authorization.
    • RSA Innovation Sandbox Insights
    • Getting Noticed at RSA: What marketing strategies actually work to capture attention from CISOs and executives at a massive conference like RSA?
    • The Current State of AI Security Knowledge


    Questions asked:

    (00:00) Introduction

    (02:44) RSA's Big Theme: The Rise of Agentic AI

    (09:07) Defining Agentic AI: Beyond Basic Automation

    (12:56) AI Agents vs. API Calls: Clarifying the Confusion

    (17:54) AI Terms Explained: Inference vs. User Inference

    (21:18) MCP Deployment Dangers: Identifying Real Enterprise Risks

    (25:59) Managing MCP Risk: Practical Steps for CISOs

    (29:13) MCP Architecture: Understanding Server vs. Client Risks

    (32:18) AI's Impact on Browser Security: The New OS?

    (36:03) AI & Access Management: The Identity & Authorization Challenge

    (47:48) RSA Innovation Sandbox 2025: Top Startups & Winner Insights

    (51:40) Marketing That Cuts Through: How to REALLY Get Noticed at RSA

    Show More Show Less
    1 hr and 3 mins
  • MCP vs A2A Explained: AI Agent Communication Protocols & Security Risks
    Apr 18 2025

    Dive deep into the world of AI agent communication with this episode. Join hosts Caleb Sima and Ashish Rajan as they break down the crucial protocols enabling AI agents to interact and perform tasks: Model Context Protocol (MCP) and Agent-to-Agent (A2A).

    Discover what MCP and A2A are, why they're essential for unlocking AI's potential beyond simple chatbots, and how they allow AI to gain "hands and feet" to interact with systems like your desktop, browsers, or enterprise tools like Jira. The hosts explore practical use cases, the underlying technical architecture involving clients and servers, and the significant security implications, including remote execution risks, authentication challenges, and the need for robust authorization and privilege management.

    The discussion also covers Google's entry with the A2A protocol, comparing and contrasting it with Anthropic's MCP, and debating whether they are complementary or competing standards. Learn about the potential "AI-ification" of services, the likely emergence of MCP firewalls, and predictions for the future of AI interaction, such as AI DNS.

    If you're working with AI, managing cybersecurity in the age of AI, or simply curious about how AI agents communicate and the associated security considerations, this episode provides critical insights and context.


    Questions asked:

    (00:00) Introduction: AI Agents & Communication Protocols

    (02:06) What is MCP (Model Context Protocol)? Defining AI Agent Communication

    (05:54) MCP & Agentic Workflows: Enabling AI Actions & Use Cases

    (09:14) Why MCP Matters: Use Cases & The Need for AI Integration

    (14:27) MCP Security Risks: Remote Execution, Authentication & Vulnerabilities

    (19:01) Google's A2A vs Anthropic's MCP: Protocol Comparison & Debate

    (31:37) Future-Proofing Security: MCP & A2A Impact on Security Roadmaps

    (38:00) - MCP vs A2A: Predicting the Dominant AI Protocol

    (44:36) - The Future of AI Communication: MCP Firewalls, AI DNS & Beyond

    (47:45) - Real-World MCP/A2A: Adoption Hurdles & Practical Examples

    Show More Show Less
    54 mins