
that BlueCoat Guy

By: Sachin Menon

About this listen

The world is noisy. This podcast isn’t.

That Blue Coat Guy brings you unfiltered conversations on business, technology, AI, leadership, and the ideas shaping what’s coming next, straight from founders, operators, and thinkers who are actually in the game.

This podcast is for builders, professionals, and curious minds who want clarity over hype, depth over trends, and real insights you can think with, not just consume.

🎙️ New episodes drop twice a week.

© 2025 Sachin Menon
Economics · Leadership · Management & Leadership
Episodes
  • Will AI Replace Professionals in 10 Years? | Vincent Teyssier on the Next Era of Work
    Mar 4 2026

    This Conversation Felt Different.

    Some episodes are about technology.
    Some are about careers.

    This one was about consequences.

    When I sat down with Vincent Teyssier, a technologist who started coding at 10, served in the Air Force, studied justice and ethics at Harvard, and now builds AI systems in wealth management, I knew this wouldn’t be a surface-level AI discussion.

    But I didn’t expect this level of clarity.

    We talked about:
    • Why massive job displacement may happen sooner than we think
    • Why most AI builders can’t truly think about consequences
    • Why leadership is measured by delivery, not perception
    • Why exposure matters more than curriculum in an AI-first world
    • And why AI might soon become your teammate

    One line that stayed with me:

    “It’s not because you have a generative AI hammer that every problem is a nail.”

    This episode isn’t about hype.
    It’s about discipline, ethics, trade-offs, and how serious builders are actually thinking.

    Vincent doesn’t sugarcoat the future.
    He talks about job displacement, robotics, agent farms, and AI elitism — bluntly.

    But he also talks about opportunity.
    About surfing the wave of change instead of resisting it.

    If you’re a:

    • Student trying to stay relevant
    • Founder building with AI
    • Engineer transitioning into leadership
    • Or a professional wondering where this is headed

    This conversation will stretch how you think.

    ⏱️ Chapters

    00:40 – Coding at 10 & dropping out
    03:10 – What the Air Force teaches about limits
    05:50 – Why study justice & ethics at Harvard?
    09:30 – Do AI builders think about consequences?
    11:20 – Economic shock & AI displacement
    14:00 – The rise of generalist specialists
    17:30 – Exposure is greater than curriculum in AI
    20:00 – The hardest leadership lesson
    23:00 – AI in wealth management: hallucinations & risk
    27:50 – Guardrails, prompt injection & security
    31:20 – AI agents as team members
    33:20 – Rapid fire
    34:40 – Final reflections

    Course recommended by Vincent:

    🔗 Harvard justice course by Michael Sandel
    https://www.youtube.com/playlist?list=PL30C13C91CFFEFEA6

    About the Guest

    Vincent has worked across military systems, telco, fintech, NGOs, and private equity-backed environments.
    Today he operates at the intersection of AI, finance, and ethics — where mistakes are expensive and trust is non-negotiable.
    🔗 LinkedIn: https://www.linkedin.com/in/vincent-teyssier/
    🔗 BetterSG: https://better.sg/

    Connect with Me

    🎙 101 Talks with Sachin Menon
    🔗 Spotify: https://open.spotify.com/show/4zLUooXgeNmPqDn90i558R?si=daa1b1912f164f25
    🔗 Apple Podcasts: https://podcasts.apple.com/us/podcast/that-bluecoat-guy/id1874932215
    🔗 My LinkedIn: https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    About 101 Talks

    101 Talks explores leadership, technology, and judgment.
    Because the future won’t just be shaped by intelligence.
    It will be shaped by responsibility.

    35 mins
  • AI and Leadership: What Every Founder Must Know in 2026
    Feb 27 2026

    This episode is not about AI.
    It’s about something far more uncomfortable: leadership and responsibility in an age of intelligence we don’t fully control.

    In this conversation, I sit down with Dino Perone — a leader who has scaled billion-dollar revenue engines, spent over two decades inside AT&T, and is now building AI that understands human emotion.

    But this isn’t a conversation about tools, trends, or hype.

    It’s about:
    • What leadership looks like when machines influence human behavior
    • Why execution matters more than perfect strategy
    • The real risk of AI: not replacement, but over-dependence
    • And the one thing leaders must protect when everything else is changing

    We go deep into:
    • Military leadership → corporate scale → AI empathy
    • Building trust in a world filled with uncertainty
    • Why values are not optional in AI: they are the foundation

    One line that stayed with me:
    “There is no perfect plan. Execution is infinitely more important.”

    If you’re a founder, operator, student, or someone trying to make sense of where AI is taking us — this conversation will challenge how you think about leadership.

    🧠 ABOUT THE SHOW

    101 Talks with Sachin Menon explores the intersection of:
    Leadership
    Technology
    Human judgment

    Because the future isn’t just built on intelligence.
    It’s built on how we choose to use it.

    ⏱️ TIMESTAMPS

    00:00 – Why AI without responsibility is dangerous
    01:00 – Leadership lessons from the military
    04:00 – Scale, patience & execution inside AT&T
    06:30 – Why AI + empathy is the next frontier
    10:40 – What AI is really changing in sales leadership
    12:30 – The line between innovation & responsibility
    14:10 – The biggest risk of emotional AI
    16:30 – Missing values in today’s AI systems
    18:10 – What defines a leader under pressure
    19:30 – Skills that matter in an AI-first world
    20:55 – Rapid fire
    22:10 – What leaders must protect in the future

    Connect with Dino Perone:
    https://www.linkedin.com/in/dinoperone-cro/

    If this conversation made you pause, question, or rethink how you see AI — that means we did our job.

    AI will keep moving fast.
    The real question is whether our thinking can keep up.

    Let me know your biggest takeaway in the comments.

    Connect with me here:
    LinkedIn: https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    Subscribe for more conversations with global leaders shaping the future of AI and business.

    23 mins
  • LLMs Are Not the Beginning of AI. Here’s the Truth. | Dr. Sam Li on the Real AI Journey
    Feb 20 2026

    My guest today is Dr. Sam Li, Global AI Leader, Board Advisor, and someone who was building AI long before it became the loudest word in business.

    AI didn’t begin with LLMs. It didn’t start with ChatGPT. And it definitely didn’t start with hype.

    In this conversation, we go beyond tools and trends. We talk about responsibility, leadership, adaptability, and what it really takes to build meaningful AI systems.

    In this episode, we discuss:

    • Why LLMs are not the beginning of AI
    • The evolution from traditional ML to modern agentic systems
    • The difference between a Chief AI Officer and Head of ML
    • Why AI should make life easier, not replace your thinking
    • What students should actually be learning in the AI era

    One thing that stood out to me personally was this — the hardest AI question isn’t “Can we build it?” It’s “Should we build it?”

    Dr. Sam also shares insights from working across enterprises, consulting, academia, and boardrooms — giving a rare perspective that connects code, classrooms, and C-suites.

    If you are a student, an AI professional, a founder, or someone trying to understand where this AI wave is heading — this conversation will help you zoom out and think clearly.

    Because AI will keep moving fast.

    The real question is whether our thinking can keep up.

    📘 Book Recommendation from Dr. Sam:
    The First 90 Days by Michael Watkins
    https://www.amazon.com/First-90-Days-Strategies-Expanded/dp/1422188612

    🔎 References Mentioned in the Episode:

    • Multi-Agent Systems: https://en.wikipedia.org/wiki/Multi-agent_system
    • ELIZA (early AI chatbot): https://en.wikipedia.org/wiki/ELIZA
    • Large Language Models (LLMs): https://en.wikipedia.org/wiki/Large_language_model
    • Responsible AI & Governance (OECD AI Principles): https://oecd.ai/en/ai-principles

    Timestamps
    00:00 – The Journey of AI: From Labs to Boardrooms
    03:05 – Early Career and the Transition to AI
    06:05 – Understanding Multi-Agent Systems
    08:34 – AI in the Business Context
    10:51 – Shaping Decisions in AI Strategy
    13:05 – Misconceptions in Boardrooms about AI
    15:47 – The Role of Chief AI Officer vs. Head of ML
    19:46 – Skills for the Future: What Students Should Learn
    25:11 – The Importance of Adaptability in AI Development
    26:22 – Understanding the Evolution of AI Models
    28:48 – India's Position in AI Leadership
    31:31 – Rapid Fire Insights on AI and Personal Preferences
    35:22 – The Role of AI Governance in Innovation

    Connect with Dr. Sam Li:
    https://www.linkedin.com/in/dr-sam-li-weixian-844033ba

    If this conversation made you pause, question, or rethink how you see AI — that means we did our job.

    AI will keep moving fast.
    The real question is whether our thinking can keep up.

    Let me know your biggest takeaway in the comments.

    Connect with me here:
    LinkedIn: https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    Subscribe for more conversations with global leaders shaping the future of AI and business.

    39 mins