
Hack the Algo w Swapneel Mehta
About this listen
with Dr. Swapneel Mehta
We know the algorithm is messing with us. But what if we could mess with it back?
In this episode, we’re joined by machine learning scientist and digital trust builder Dr. Swapneel Mehta, who’s worked at Twitter, Adobe, Slack, CERN, and MIT—and now leads SIMPPL, a nonprofit restoring digital trust across seven countries.
We explore the hidden levers behind the feeds we scroll, the platforms we feed, and the narratives feeding on us. YUMMMMY. From fake-news detection tools and Russia’s meme warfare to hacking LinkedIn to actually show you what you care about (yes, LINKEDIN CAN BE A PLACE YOU ENJOY), Swapneel walks us through the logic, power, and danger of modern algorithms—and what you can do to fight back.
Key Topics
-
Why the most dangerous players aren't Big Tech, and who to watch instead
-
How recommender systems actually work (and how to reverse-engineer your feed)
-
The chilling limits of current AI guardrails—and why we need new ones
-
Why trust and truth require human systems, not just code
-
The surprising algorithm hack that got Swapneel cheap Uber Eats for years
Notable Quotes
-
“It’s not free speech that’s the problem—it’s free reach.”
-
“You can’t prove a harm that didn’t happen—but that’s what safety teams are for.”
-
“Everything you do online is a signal. The trick is learning what it’s signaling.”
-
“When everything you see is hyper-personalized, the algorithm stops exploring—and starts exploiting.”
-
“Trust isn’t just about information. It’s about context.”
What’s Inside
[00:01:09] – Intro to Swapneel Mehta and why he builds for digital trust
[00:02:00] – Why tiny unknown companies are more dangerous than tech giants
[00:04:01] – IRBs, unethical experiments, and what industry gets away with
[00:06:40] – Why we regulate medicine but not memes
[00:08:20] – Trust & Safety layoffs and the problem with proving negatives
[00:12:40] – Why platforms aren’t legally liable for the content they amplify
[00:15:00] – Algorithms promote anger, not accuracy—and that’s by design
[00:19:00] – Can we hack the algorithm? Swapneel says: yes
[00:22:00] – Filter bubbles, personalization traps, and digital exploitation
[00:28:00] – Can we ever get truly “unbiased” content?
[00:30:00] – How AI can help (and sometimes out-label humans)
[00:33:00] – Real-time bot network detection during the Ukraine war
[00:39:00] – How coordinated harassment campaigns can be uncovered—and stopped
[00:44:00] – A step-by-step guide to hacking your LinkedIn feed (it’s a signal system)
[00:53:00] – Uber Eats, conversion flow, and how Swapneel got years of discounts
[00:58:00] – Using AI + humans to reduce maternal mortality in India
[01:01:00] – Teaching undergrads to build viable AI products that serve the public
[01:02:43] – Final words: Free speech, not free reach
MAD ANGEL SPONSOR: Randori Resources
This episode is brought to you by Randori Resources, founded by our guest Sam Carus. Randori helps you train your mind like a fighter—without the mat burns.
👉 Visit randori-resources.com to learn more.
🎶 Plus: Hear our original MAD tribute for Randori inside the episode, made by the great Elameen.
WANT TO SUPPORT MAD?
Sponsor us. We’ll make you a weird, wonderful custom video.
Email madwarfarepodcast@gmail.com
(Or just send snacks. That works too.)
—
MAD Warfare™️ is hosted by narrative strategist Jocelyn Brady and cognitive neuroscientist Sean Anthony Guillory. Edited and produced by Amine el Filali. Visit our website at madwarfare.com for extra giggles. And send your wishes, weird ideas, dream guests, and (yes it bears repeating) sponsorship inquiries to madwarfarepodcast@gmail.com.
—
FAIR USE: This show is MAD enough to include homages, short clips, and references that provide vital context and/or moments of joy. We DEEPLY respect every creator’s work and use these moments purely for educational, artistic, and transformative purposes under Fair Use. If we are missing attributions or you would like to collaborate, please reach out—we’re always happy to chat!