Five Seconds to Fraud: Detecting AI Deepfakes Before They Strike with Ben Colman
Summary
Inside the AI Deepfake Threat
What if the voice confirming your wire transfer wasn't actually your client? Ben Colman, founder and CEO of Reality Defender, joins host John Richards to unpack one of the fastest-growing attack surfaces in cybersecurity: AI-generated deepfakes. Once the exclusive domain of Hollywood studios and nation-state actors, real-time voice and video impersonation is now accessible to anyone with a laptop—and fraudsters are scaling up fast.
From Specialized Hardware to Your Home Computer
Ben traces the evolution from the specialized machinery required six years ago to today's world where anyone can clone a voice with less than five seconds of audio—locally, for free, using open-source models. He walks through the modern fraud landscape, from grandparent scams and bank account takeovers to an eye-opening story about fake job applicants that will make any recruiting team rethink its screening process.
Reality Defender's approach is built for how organizations actually work—plugging directly into call centers, video conferencing platforms, and identity verification tools through a simple API, rather than asking teams to adopt yet another standalone product. Their probabilistic detection models scan in real time across thousands of indicators, all without storing or comparing against any biometric data.
John and Ben also get into the emerging frontier of agentic AI—what happens when you need to authenticate an AI voice agent rather than a human—and how smart permission gates can define exactly what those agents are and aren't allowed to do.
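The permission-gate idea can be pictured as a simple allowlist check: an AI agent may only perform actions it has been explicitly granted. The sketch below is purely illustrative; the agent IDs, action names, and policy shape are assumptions for the example, not Reality Defender's actual implementation.

```python
# Hypothetical permission gate for AI voice agents.
# All names here (agent IDs, actions) are illustrative assumptions.

ALLOWED_ACTIONS = {
    "support-bot": {"read_balance", "open_ticket"},
    "scheduler-bot": {"read_calendar", "book_meeting"},
}

def is_permitted(agent_id: str, action: str) -> bool:
    """Allow an action only if it is explicitly granted to this agent."""
    return action in ALLOWED_ACTIONS.get(agent_id, set())

print(is_permitted("support-bot", "open_ticket"))    # True
print(is_permitted("support-bot", "wire_transfer"))  # False: not granted
```

The key design choice is deny-by-default: an unknown agent, or an action not on the list, is refused rather than allowed.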
Questions We Answer in This Episode
- How has the barrier to creating convincing deepfakes changed in the last six years?
- What are the most common deepfake fraud vectors hitting businesses and consumers right now?
- How does Reality Defender detect AI-generated media without storing any biometric data?
- What does deepfake defense look like as agentic AI becomes mainstream?
Key Takeaways
- Voice cloning now requires less than five seconds of audio and runs locally on consumer hardware
- Deepfake fraud spans a wide range—from grandparent scams to fake job applicants to wire transfer hijacking
- Real-time detection can plug directly into tools organizations already use, with no new workflow required
- Agentic AI is creating a new category of identity challenge—and the defenses are already being built
The deepfake threat isn't coming—it's already here, hitting call centers, recruiting pipelines, and financial institutions every day. Whether you're a developer looking to integrate detection into your stack or a security leader trying to get ahead of the next wave, this conversation is an essential listen.
Resources
- Reality Defender
- Ben Colman
- Reality Defender on LinkedIn
- Follow Reality Defender on X
- CyberProof
- Learn more about Paladin Cloud
- Got a question? Ask us here!
- (00:04) - Welcome to Cyber Sentries
- (00:35) - Meet Ben Colman, Reality Defender
- (01:23) - Ben’s Beginnings
- (02:36) - Changing Landscape
- (03:57) - What It Looks Like Today
- (05:07) - Differences
- (06:16) - Main Ways Fraud’s Committed
- (09:21) - Way to Tackle It
- (11:07) - Distinguishing the AI
- (13:14) - Response Time
- (14:09) - Recommended Next Steps
- (15:55) - Where It’s Heading
- (19:21) - How to Use as Organization
- (20:52) - Developer Community
- (22:23) - Audio and Video
- (23:34) - Risk Assessment
- (24:41) - Prevalence
- (26:09) - Wrap Up