• People-Pleasers: Why AI Agents Go Rogue and How to Govern Them at Scale with Shreyans Mehta
    May 6 2026

    Agent Gone Rogue: How to Build Behavioral Guardrails for Agentic AI in the Enterprise with Shreyans Mehta

    Host John Richards welcomes back Shreyans Mehta, CTO and co-founder of Cequence, for a return visit that couldn't be more timely. Two years ago, they were talking about securing AI at the application layer. Now enterprises are running thousands of autonomous agents around the clock, and the security perimeter has fundamentally changed. In this episode, John and Shreyans dig into the new class of risk that comes with agentic AI—and what it actually takes to govern it.

    When Your AI Agent Deletes the System to Delete the Email

    Shreyans opens with a concept that reframes the whole conversation: AI agents aren't just a productivity tool—they're autonomous actors with access to your most sensitive systems. The problem isn't that they'll go rogue on purpose. It's that they're people-pleasers. They will exhaust every available path to complete a task, which means broad access will get used in ways you never anticipated.

    He shares two stories that land hard. First, a research case study called Agents of Chaos, where an agent tasked with deleting an email—lacking email-delete permissions—resolved the problem by deleting the system instead. Second, a real customer scenario where a Claude Code-based agent spent an entire weekend trying to upgrade a legacy codebase and, when it couldn't fetch a file due to a missing SHA value, started guessing characters one by one—for hours.

    The fix isn't just identity and access management—it's a new layer Shreyans calls agent behavioral analytics. Start with a plain-English job description. Cequence translates that into deterministic rules: what the agent can access, what it can send, what it can never do. Every interaction is monitored against that job description in real time—not just logged, but enforced. When the email assistant starts forwarding sensitive data to an unknown address, it gets stopped, not flagged.
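    To make the idea concrete, here is a minimal sketch of that enforcement loop—purely illustrative, not Cequence's actual implementation: a job description is distilled into deterministic allow/deny rules, and every agent action is checked before it executes rather than merely logged.

    ```python
    # Hypothetical guardrail sketch: deterministic rules derived from a
    # plain-English job description, enforced on every agent action.
    from dataclasses import dataclass, field

    @dataclass
    class JobDescription:
        allowed_actions: set = field(default_factory=set)     # what the agent can do
        allowed_recipients: set = field(default_factory=set)  # where it may send data
        forbidden_actions: set = field(default_factory=set)   # what it must never do

    def enforce(job: JobDescription, action: str, recipient: str | None = None) -> bool:
        """Return True if the action may proceed; block (not just flag) otherwise."""
        if action in job.forbidden_actions:
            return False
        if action not in job.allowed_actions:
            return False
        if recipient is not None and recipient not in job.allowed_recipients:
            return False  # e.g. forwarding to an unknown address is stopped
        return True

    assistant = JobDescription(
        allowed_actions={"read_email", "draft_reply", "send_email"},
        allowed_recipients={"team@example.com"},
        forbidden_actions={"delete_system"},
    )

    assert enforce(assistant, "send_email", "team@example.com")
    assert not enforce(assistant, "send_email", "attacker@unknown.example")
    assert not enforce(assistant, "delete_system")
    ```

    The point of the sketch is the shape of the control: the check sits in the request path, so a disallowed action never happens, rather than showing up later in a log review.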

    Questions We Answer in This Episode

    • Why is identity management alone not enough to secure AI agents?
    • What is the token flattening problem, and why does it matter for enterprise security?
    • How do you translate a plain-English agent job description into deterministic access controls?
    • What does agent behavioral analytics look like in practice—and who owns it inside an organization?

    Key Takeaways

    • AI agents are already in your environment—the only question is whether you're governing them.
    • Every agent needs a job description that converts into deterministic rules, not just an identity token.
    • Monitoring must be tied to behavior, not just access logs—and it has to stop bad actions, not just detect them.
    • Agent sprawl demands a new security category built for non-human, 24/7 actors.

    If your organization is running agentic AI and nobody owns the behavioral layer yet, this episode is a good place to start. The enterprises getting it right aren't waiting for security teams to green-light every agent—they're using tools that translate intent into guardrails automatically. Give it a listen, then check out the resources below.

    Resources

    • Shreyans Mehta, Cequence: LinkedIn
    • Cequence AI Gateway
    • Cequence on LinkedIn
    • CyberProof
    • Learn more about Paladin Cloud
    • Got a question? Ask us here!
    • (00:00) - Welcome to Cyber Sentries
    • (01:08) - Shreyans Mehta
    • (01:57) - Changes Since His First Visit
    • (04:03) - Finding Ways to Feel More Comfortable
    • (11:24) - Getting a Handle on It
    • (16:11) - Access and Profiles
    • (21:55) - Transitioning to Rules
    • (24:24) - How Teams Use This
    • (26:49) - Playing Out in the Real World
    • (27:49) - Learning More
    • (29:07) - Wrap Up
    31 mins
  • Five Seconds to Fraud: Detecting AI Deepfakes Before They Strike with Ben Colman
    Apr 1 2026

    Inside the AI Deepfake Threat

    What if the voice confirming your wire transfer wasn't actually your client? Ben Colman, founder and CEO of Reality Defender, joins host John Richards to unpack one of the fastest-growing attack surfaces in cybersecurity: AI-generated deepfakes. Once the exclusive domain of Hollywood studios and nation-state actors, real-time voice and video impersonation is now accessible to anyone with a laptop—and fraudsters are scaling up fast.

    From Specialized Hardware to Your Home Computer

    Ben traces the evolution from the specialized machinery required six years ago to today's world where anyone can clone a voice with less than five seconds of audio—locally, for free, using open-source models. He walks through the modern fraud landscape, from grandparent scams and bank account takeovers to an eye-opening story about fake job applicants that will make any recruiting team rethink its screening process.

    Reality Defender's approach is built for how organizations actually work—plugging directly into call centers, video conferencing platforms, and identity verification tools through a simple API, rather than asking teams to adopt yet another standalone product. Their probabilistic detection models scan in real time across thousands of indicators, all without storing or comparing against any biometric data.

    John and Ben also get into the emerging frontier of agentic AI—what happens when you need to authenticate an AI voice agent rather than a human—and how smart permission gates can define exactly what those agents are and aren't allowed to do.

    Questions We Answer in This Episode

    • How has the barrier to creating convincing deepfakes changed in the last six years?
    • What are the most common deepfake fraud vectors hitting businesses and consumers right now?
    • How does Reality Defender detect AI-generated media without storing any biometric data?
    • What does deepfake defense look like as agentic AI becomes mainstream?

    Key Takeaways

    • Voice cloning now requires less than five seconds of audio and runs locally on consumer hardware
    • Deepfake fraud spans a wide range—from grandparent scams to fake job applicants to wire transfer hijacking
    • Real-time detection can plug directly into tools organizations already use, with no new workflow required
    • Agentic AI is creating a new category of identity challenge—and the defenses are already being built

    The deepfake threat isn't coming—it's already here, hitting call centers, recruiting pipelines, and financial institutions every day. Whether you're a developer looking to integrate detection into your stack or a security leader trying to get ahead of the next wave, this conversation is an essential listen.

    Resources

    • Reality Defender
    • Ben Colman
    • Reality Defender on LinkedIn
    • Follow Reality Defender on X
    • CyberProof
    • Learn more about Paladin Cloud
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (00:35) - Meet Ben Colman, Reality Defender
    • (01:23) - Ben’s Beginnings
    • (02:36) - Changing Landscape
    • (03:57) - What It Looks Like Today
    • (05:07) - Differences
    • (06:16) - Main Ways Fraud’s Committed
    • (09:21) - Way to Tackle It
    • (11:07) - Distinguishing the AI
    • (13:14) - Response Time
    • (14:09) - Recommended Next Steps
    • (15:55) - Where It’s Heading
    • (19:21) - How to Use as Organization
    • (20:52) - Developer Community
    • (22:23) - Audio and Video
    • (23:34) - Risk Assessment
    • (24:41) - Prevalence
    • (26:09) - Wrap Up
    29 mins
  • Built Fast, Broken Faster: MCP & AI App Security—with GitGuardian’s Gaetan Ferry
    Mar 4 2026

    When “Ship Fast” Meets “Secure by Design” in AI Apps

    AI-driven development is moving at breakneck speed—and attackers are taking advantage of the shortcuts. In this episode of Cyber Sentries: AI Insights for Cloud Security, host John Richards sits down with Gaetan Ferry, security researcher at GitGuardian, to unpack how modern AI tooling, MCP servers, and cloud platforms are reshaping the security landscape. The core problem: the same agentic workflows that boost productivity can also multiply identities, credentials, and blast radius if something goes wrong.

    After John and Gaetan set the stage, Gaetan walks through a real-world-style vulnerability chain involving smithery.ai, an MCP server registry/hosting platform. It’s a practical look at how “classic” web issues can still show up in brand-new AI ecosystems—and how one small weakness can cascade into bigger supply chain risk. Along the way, they explore why secret sprawl is accelerating, what attackers are hunting for, and why observability is becoming as essential for identities and tokens as it is for infrastructure.

    Why MCP Servers, OAuth, and Secret Sprawl Are Colliding

    A big theme is the tension between usability and security: teams want agents that can “do everything,” which often means broad permissions and long-lived credentials. Gaetan explains why adopting OAuth is directionally better than static API keys, but still not a silver bullet in a world where agents need delegated access and tokens inevitably “live somewhere.” John pushes on what builders can do now—especially when new frameworks (and new hype cycles) keep resetting hard-won security practices.

    The conversation lands on pragmatic guidance: reduce blast radius where you can, inventory identities and secrets, and invest in observability so you can respond fast when—not if—credentials leak. Note: This episode discusses breach scenarios and exploitation chains—be thoughtful about sharing internal security details and incident response specifics.

    Questions We Answer in This Episode

    • How can a simple web flaw turn into an AI supply chain attack through MCP server hosting?
    • Why doesn’t OAuth automatically “solve” agent security and credential risk?
    • What does “limiting blast radius” look like when agents need broad permissions to be useful?
    • How can observability help you detect and respond to secrets sprawl across AI tools?

    Key Takeaways

    • Treat MCP servers and agent integrations like critical supply chain dependencies—because they are.
    • Prefer short-lived, scoped credentials (OAuth when possible), but plan for token theft scenarios anyway.
    • Reduce blast radius with least privilege, separation of duties, and segmented agent access.
    • Build identity and secret observability so you can triage and remediate leaks quickly.
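    The "short-lived, scoped credentials" takeaway can be sketched in a few lines—this is a generic illustration under assumed names, not GitGuardian's tooling: every token carries an explicit scope and expiry, so a stolen token has a bounded blast radius.

    ```python
    # Illustrative sketch: mint tokens with explicit scope and TTL, and
    # reject anything expired or out of scope.
    import secrets
    import time

    def mint_token(scopes: set, ttl_seconds: int = 300) -> dict:
        """Issue a short-lived token restricted to the named scopes."""
        return {
            "value": secrets.token_urlsafe(32),
            "scopes": frozenset(scopes),
            "expires_at": time.time() + ttl_seconds,
        }

    def authorize(token: dict, required_scope: str) -> bool:
        """Reject expired tokens and any request outside the token's scope."""
        if time.time() >= token["expires_at"]:
            return False
        return required_scope in token["scopes"]

    token = mint_token({"repo:read"}, ttl_seconds=60)
    assert authorize(token, "repo:read")
    assert not authorize(token, "repo:write")  # out of scope: blast radius limited
    ```

    The same shape applies whether the token comes from an OAuth flow or an internal issuer: the planning step is assuming the token leaks anyway and asking what it could still do.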

    The Bottom Line for AI Security Teams in 2026

    If you’re experimenting with MCP servers or rolling out agentic workflows, this episode is a timely reminder that fundamentals still win. John and Gaetan make the case that “moving fast” doesn’t have to mean accepting unlimited credential risk—you can ship quickly while still tightening scopes, tracking identities, and watching where secrets spread. Tune in for the real-world examples and the practical mindset shift that helps teams stay productive without becoming the next supply chain headline.

    Links & Notes

    • GitGuardian
    • Connect with Gaetan on LinkedIn
    • State of Secrets Sprawl Report 2025
    • State of Secrets Sprawl Report 2026 (coming later in March!)
    • CyberProof
    • Learn more about Paladin Cloud
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (01:07) - Meet Gaetan Ferry
    • (02:19) - Attacks
    • (03:17) - Vulnerabilities
    • (07:38) - One-Off or Widespread?
    • (10:20) - Recommendations to Avoid
    • (14:19) - Exploiting
    • (16:50) - Resolving
    • (23:13) - Path Forward
    • (30:53) - Impact
    • (34:48) - Year of Supply Chain Attacks
    • (35:51) - Wrap Up
    39 mins
  • Identity in the AI Era: Managing Enterprise Risk in the Age of AI with Jasson Casey
    Feb 4 2026

    The Evolution of Identity Security in the Age of AI

    In this episode of Cyber Sentries, John Richards sits down with Jasson Casey, CEO and co-founder of Beyond Identity, to explore the intersection of identity security, AI, and enterprise risk management. As organizations rapidly adopt AI tools and agents, the fundamental challenges of identity security are evolving—requiring both new approaches and a return to core principles.

    Identity: The Foundation of Modern Security

    Jasson explains how identity has become the root cause of most security incidents, with identity-based failures accounting for 80% of security tickets. The conversation explores how AI is transforming every role in modern organizations, while highlighting the security implications of this rapid adoption.

    Key Takeaways:

    • Identity security is fundamental to managing AI risk in enterprises
    • Traditional security concepts still apply but require new implementation approaches
    • Organizations need to track data flow and permissions across AI systems

    Looking Ahead

    As AI adoption accelerates, organizations must balance innovation with security. Through proper identity management and understanding of data flow, enterprises can prevent most security incidents while embracing the transformative potential of AI technologies.

    Links & Notes

    • Beyond Identity
    • AI Solutions
    • Connect with Jasson Casey on LinkedIn
    • Connect with Jasson Casey on X
    • CyberProof
    • Learn more about Paladin Cloud
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (01:26) - Meet Jasson Casey
    • (03:15) - Regrets?
    • (08:43) - Friction Point
    • (10:52) - Identity
    • (17:32) - Adoption
    • (22:41) - The Hallmark of Network Security
    • (28:34) - Paint Analogy
    • (31:41) - Threats
    • (34:32) - Visualization Tool
    • (35:37) - Their Work in This Space
    • (37:29) - Learning More
    • (38:00) - Wrap Up
    39 mins
  • Security Data Pipelines: How to Cut SIEM Costs and Noise with Dina Kamal
    Jan 14 2026

    SIEM Speed Without the Sprawl—DataBahn’s Take on Security Data Pipelines

    In this Cyber Sentries: AI Insights for Cloud Security episode, host John Richards sits down with Dina Kamal, Chief Revenue Officer at DataBahn, to tackle a familiar cloud security problem: teams can’t get the right data into the SIEM fast enough, and when they do, costs and noise spike. After the introductions, John and Dina dig into why data integration and parsing often consume most of the timeline in SIEM projects—and how a security data pipeline layer can compress onboarding from months to weeks.

    They also explore what “doing more with less” looks like in a modern SOC: filtering and routing data based on detection value, preserving what’s needed for compliance, and keeping flexibility for SIEM migrations. Dina’s bigger point is that AI only becomes truly useful when it’s paired with domain expertise and real operational context—otherwise it’s easy to end up with impressive-looking outputs that don’t hold up under investigation pressure.
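    The filter-and-route idea can be sketched as a single pipeline stage—the source names and fields here are hypothetical, not DataBahn's product: events with detection value go to the SIEM, everything else is routed to cheap storage and preserved for compliance.

    ```python
    # Illustrative pipeline stage: route each event by security value,
    # not just by volume.
    HIGH_VALUE_SOURCES = {"auth", "edr", "firewall-deny"}  # assumed detection-relevant feeds

    def route(event: dict) -> str:
        """Decide the destination for a single log event."""
        if event.get("source") in HIGH_VALUE_SOURCES:
            return "siem"     # full-cost analytics destination
        return "archive"      # low-cost store, retained for compliance

    events = [
        {"source": "auth", "msg": "failed login"},
        {"source": "netflow", "msg": "routine flow record"},
    ]
    assert [route(e) for e in events] == ["siem", "archive"]
    ```

    Because the routing decision lives in the pipeline rather than in the SIEM, swapping the destination during a migration is a configuration change instead of a re-onboarding project.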

    Questions We Answer in This Episode

    • Why do SIEM projects stall on data onboarding, and what speeds it up?
    • How can you cut SIEM ingestion costs without weakening detections?
    • How does owning your security data change SIEM migrations?
    • Where does AI help most in SOC workflows, and where do guardrails matter?

    Key Takeaways

    • Data pipelines remove SIEM “plumbing” bottlenecks by automating collection, parsing, and transformation.
    • Cost reduction works best when you filter by security value, not just by volume.
    • Decoupling data collection from the SIEM reduces lock-in and simplifies vendor changes.
    • AI is strongest when guided by security context and experienced practitioners.

    The throughline is practical: better detections and faster investigations start upstream with intentional data handling. By treating the SIEM as a high-value analytics destination instead of a dumping ground, teams can regain capacity, reduce noise, and keep options open as tools and vendors change. And when AI is applied to the right parts of the workflow—with clear constraints and real-world context—it can accelerate outcomes without compromising trust.

    Links & Notes

    • DataBahn
    • Connect with Dina Kamal on LinkedIn
    • Learn more about CyberProof
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (01:26) - Meet Dina Kamal
    • (03:38) - Data Pipeline Management
    • (06:18) - The Target
    • (07:56) - Changing Vendors
    • (08:57) - No Storage
    • (09:54) - Why People Need It
    • (13:33) - Ahead of the Curve
    • (20:18) - Capturing the Data
    • (23:25) - Useful Data
    • (26:26) - More with Less
    • (27:27) - Visibility
    • (30:03) - When to Start
    • (31:28) - Wrap Up
    33 mins
  • Securing AI Agents: How to Stop Credential Leaks and Protect Non‑Human Identities with Idan Gour
    Dec 10 2025

    Bridging the AI Security Gap—Inside the Rise of Non‑Human Identities

    In this episode of Cyber Sentries from CyberProof, host John Richards sits down with Idan Gour, co-founder and president of Astrix Security, to unpack one of today’s fastest-emerging challenges: securing AI agents and non-human identities (NHIs) in the modern enterprise. As companies rush to adopt generative-AI tools and deploy Model Context Protocol (MCP) servers, they’re unlocking incredible automation—and a brand-new attack surface. Together, John and Idan explore how credential leakage, hard-coded secrets, and rapid “shadow-AI” experimentation are exposing organizations to unseen risks, and what leaders can do to stay ahead.

    From Non‑Human Chaos to Secure‑by‑Design AI

    Idan shares the origin story of Astrix Security—built to close the identity-security gap left behind by traditional IAM tools. He explains how enterprises can safely navigate their AI journey using the Discover → Secure → Deploy framework for managing non-human access. The conversation moves from early automation risk to today’s complex landscape of MCP deployments, secret-management pitfalls, and just-in-time credentialing. John and Idan also discuss Astrix’s open-source MCP wrapper, designed to prevent hard‑coded credentials from leaking during model integration—a practical step organizations can adopt immediately.
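    The just-in-time credentialing idea can be illustrated in a few lines—this is a generic sketch of the pattern, not Astrix's actual wrapper: the secret is fetched from a vault at call time and injected only for the duration of the call, so it is never hard-coded into the agent or its config.

    ```python
    # Hypothetical just-in-time secret injection; the environment variable
    # stands in for a vault lookup.
    import os

    def fetch_secret(name: str) -> str:
        """Stand-in for a vault lookup; here we read an environment variable."""
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"secret {name!r} not provisioned")
        return value

    def call_tool_with_jit_secret(secret_name: str, tool):
        """Inject the credential only for the duration of the call."""
        return tool(fetch_secret(secret_name))

    os.environ["DEMO_API_KEY"] = "example-value"  # provisioning step, demo only
    result = call_tool_with_jit_secret(
        "DEMO_API_KEY", lambda key: f"called with {len(key)}-char key"
    )
    assert result == "called with 13-char key"
    ```

    The security property comes from what the code does not contain: no literal credential to leak through a repository, a log, or a prompt.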

    Questions We Answer in This Episode

    • How can companies prevent AI‑agent credentials from leaking across cloud and development environments?
    • What’s driving the explosion of non‑human identities—and how can security teams regain control?
    • When should organizations begin securing AI agents in their adoption cycle?
    • What frameworks or first principles best guide safe AI‑agent deployment?

    Key Takeaways

    • Start securing AI agents early—waiting until “maturity” means you’re already behind.
    • Visibility is everything: you can’t protect what you don’t know exists.
    • Automate secret management and avoid static credentials through just‑in‑time access.
    • Treat AI agents and NHIs as first‑class citizens in your identity‑security program.

    As AI adoption accelerates within every department—from R&D to customer operations—Idan emphasizes that non‑human identity management is the new frontier of cybersecurity. Getting it right means enterprises can innovate fearlessly while maintaining the integrity of their data, systems, and brand.

    Links & Notes

    • Learn more about Paladin Cloud
    • Learn more about Astrix Security
    • Open Source MCP Secret Wrapper
    • Idan Gour on LinkedIn
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (01:45) - Meet Idan Gour
    • (04:00) - As the Vertical Started to Grow
    • (07:01) - The Journey
    • (09:48) - Struggling
    • (13:42) - Risk
    • (16:39) - Targeting
    • (18:18) - Framework
    • (20:41) - Implementing Early
    • (22:16) - Back End Risks
    • (24:28) - Bridging the Gap
    • (26:36) - When to Engage Astrix
    • (30:18) - Wrap Up
    33 mins
  • AI Compliance Security: How Modular Systems Transform Enterprise Risk Management with Richa Kaul
    Nov 12 2025

    AI-Powered Compliance: Transforming Enterprise Security

    In this episode of Cyber Sentries, John Richards speaks with Richa Kaul, CEO and founder of Complyance. Richa shares insights on using modular AI systems for enterprise security compliance and discusses the critical balance between automation and human oversight in cybersecurity.

    Why Enterprise Security Compliance Matters Now

    The conversation explores how enterprises struggle with increasing cyber threats and complex third-party vendor networks. Richa explains how moving from reactive to proactive compliance monitoring can transform security posture, sharing real examples from Fortune 100 companies and major sports organizations.

    AI Implementation That Prioritizes Security

    Richa details their approach to implementing AI in compliance, emphasizing their commitment to data privacy and security. The company uses a modular AI infrastructure with opt-in features and minimal data access principles, demonstrating how AI can enhance security without compromising privacy.

    Questions We Answer:

    • How can enterprises shift from reactive to proactive compliance monitoring?
    • What are the key considerations for implementing AI in security compliance?
    • How should companies manage third-party vendor risks in the AI era?
    • What role does employee education play in maintaining security compliance?

    Key Takeaways:

    • Continuous monitoring beats point-in-time compliance checks
    • Modular AI systems offer better security control than all-in-one solutions
    • Third-party vendor risk requires automated, continuous assessment
    • Human elements like training and culture can't be fully automated

    Looking Ahead: Security Challenges

    The discussion concludes with insights into future challenges, including quantum computing's impact on security and the growing complexity of AI-related risks. Richa emphasizes the importance of building nimble, configurable systems to address emerging threats.

    Links & Notes

    • More About Richa Kaul
    • Complyance on LinkedIn and the Web
    • Learn more about Paladin Cloud
    • Learn more about CyberProof
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (01:37) - Meet Richa Kaul from Complyance
    • (02:56) - Areas Needing Security
    • (04:43) - Reactive vs. Proactive
    • (06:40) - Integrating AI
    • (08:23) - AI Compliance Challenges
    • (11:11) - Training Their Models
    • (12:39) - Evaluating Third Parties
    • (16:13) - The Team
    • (19:27) - Looking to the Future
    • (21:08) - How Others Are Implementing AI
    • (24:28) - Creating Capacity
    • (26:08) - Companies Doing It Well
    • (27:49) - When They Don’t Have the Resources
    • (29:14) - Wrap Up
    31 mins
  • AI Governance Essentials: Navigating Security and Compliance in Enterprise AI with Walter Haydock
    Oct 8 2025

    AI Governance in an Era of Rapid Change

    In this episode of Cyber Sentries, John Richards talks with Walter Haydock, founder of StackAware, about navigating the complex landscape of AI governance and security. Walter brings unique insights from his background as a Marine Corps intelligence officer and his extensive experience in both government and private sectors.

    Understanding AI Risk Management

    Walter shares his perspective on how organizations can develop practical AI governance frameworks while balancing innovation with security. He outlines a three-step approach starting with policy development, followed by thorough inventory of AI tools, and assessment of cybersecurity implications.

    The discussion explores how different industries face varying levels of AI risk, with healthcare emerging as a particularly challenging sector where both opportunities and dangers are amplified. Walter emphasizes the importance of aligning AI governance with business objectives rather than treating it as a standalone initiative.

    Questions We Answer in This Episode:

    • How should organizations approach AI governance and risk management?
    • What are the key challenges in implementing ISO 42001 for AI systems?
    • How can companies address the growing problem of "shadow AI"?
    • What are the implications of fragmented AI regulations across different jurisdictions?

    Key Takeaways:

    • Organizations need clear AI policies that define acceptable use boundaries
    • Risk management should integrate with existing frameworks rather than create separate systems
    • Companies must balance compliance requirements with innovation needs
    • Employee education and flexible approval processes help prevent shadow AI usage

    The Regulatory Landscape

    The conversation delves into emerging AI regulations, from New York City's local laws to Colorado's comprehensive AI Act. Walter provides valuable insights into how organizations can prepare for upcoming regulatory changes while maintaining operational efficiency.

    Links & Notes

    • StackAware
    • Connect with Walter on LinkedIn
    • Learn more about Paladin Cloud
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (00:53) - Walter Haydock from StackAware
    • (01:37) - Walter’s Background
    • (02:59) - Areas Needing Improvement
    • (03:47) - Integrating AI
    • (04:57) - StackAware’s Role
    • (06:49) - AI Certification Standard
    • (07:41) - Implementation Challenges
    • (08:52) - Thoughts on Looser Protocols
    • (11:39) - Regulations
    • (13:24) - Approaches
    • (15:20) - Areas of Concern
    • (17:50) - Handling Risk
    • (19:01) - Who Should Own AI Governance
    • (20:07) - Pushback?
    • (21:39) - Proper Techniques
    • (22:50) - What Levels
    • (24:13) - Smaller Companies
    • (26:17) - Ideal Legislation
    • (29:12) - Plugging Walter
    • (30:00) - Wrap Up
    31 mins