
Your AI, Your Way

By: MDCS.AI & CISCO

About this listen

In this podcast, we examine AI infrastructure from an enterprise perspective. Guests with backgrounds in enterprise IT, cloud architecture, security, finance, and education join MDCS.ai in the Cisco podcast studio to share practical experience and informed viewpoints.


Each episode addresses the questions that arise once AI initiatives move beyond experimentation and into production.


How do you design infrastructure that truly scales?
What happens to cost, performance, and control as AI workloads grow?
How do organizations balance speed, security, data sovereignty, and long-term ownership?


Rather than focusing on trends or product promotion, the discussions are grounded in real-world challenges—covering architectural choices, operating models, governance, accountability, and the trade-offs organizations must navigate when building or scaling AI environments.


Your AI, Your Way is intended for AI leaders and practitioners responsible for delivering AI in practice, not just in theory.

© 2026 MDCS.AI & CISCO
Episodes
  • Lorentz
    Feb 19 2026

    No power, no compute. That is the reality in the Netherlands today.

    When Nvidia assessed Europe, the message was clear: there is no sovereign AI infrastructure here. Scandinavian countries have power to spare. The Netherlands, with its congested grid, does not. If nothing changes, the next generation of AI talent will have to leave the country to do serious work.

    Lorentz is the response. A regional AI initiative built by entrepreneurs, for entrepreneurs, without government funding or European program delays.

    The model exists already. In Sweden, the Wallenberg family funded Berzelius, and within years an ecosystem of talent, startups, and commercial success emerged around it. Lorentz applies the same concept to the Netherlands, starting with a single cluster focused on Digital Health.

    The goal is not just compute power. It is bringing together investors, universities, consultancies, and startups around shared infrastructure. A place where AI use cases move from pilot to revenue.

    In this 45-minute discussion recorded at the Cisco Studio in Amsterdam, Viktor Mirovic (Lorentz) and Ken van Ierlant (Mr Data / AI Leadership program) explain why the Dutch need to stop waiting and start building.

    Key topics include:

    • Why a year in AI time equals a century, and why large national programs will arrive too late.
    • How 80 to 90 percent of enterprise IT budgets disappear into legacy systems, leaving no room for innovation.
    • The difference between AI as a "shiny object" and AI as a transformation of operating models.
    • Why sovereignty matters when your strategic advantage depends on proprietary data and models.
    • How Lorentz plans to replicate its first cluster across multiple regions and domains.
    45 mins
  • AI Infrastructure Costs
    Feb 19 2026

    Most AI pilots never reach production.

    The technology works. The use case makes sense. Then the cloud bill arrives. Costs spiral before anyone sees it coming.

    It starts small. A few GPUs in the cloud. Reasonable invoices. Then the project scales. Storage costs appear. Data transfer fees stack up. That monthly cloud bill? It can multiply by thirty before finance even flags it.

    Meanwhile, GPUs sit idle. Storage and network cannot keep up with compute. Organizations invest in processing power, then watch it wait for data that arrives too slowly. Utilization rates below thirty percent are common.

    Pilots get cancelled, budgets freeze, and AI ambitions stall across the organization.

    In this 34-minute discussion recorded at the Cisco Studio in Amsterdam, Guy D'Hauwer (Automation Group) and Sander ten Hoedt (Cisco) break down what actually drives AI infrastructure costs and when it makes sense to move from cloud to owned infrastructure.

    Key topics include:

    • Why "cost per token" should be the metric every AI team tracks, and why most do not.
    • How cloud flexibility turns into cloud lock-in through services that stack fees on fees.
    • The break-even point where owned infrastructure starts delivering more capacity for the same budget.
    • Why GPU underutilization is rarely a GPU problem, and what bottlenecks actually cause it.
    • How prefab modular datacenters cut deployment time from months to days.
    35 mins
  • AI Center of Excellence
    Feb 19 2026

    Giving people AI tools is not the same as AI adoption.

    Most employees are driven by their inbox. Add a strategic AI project on top, and enthusiasm alone will not create capacity. Without structure, AI becomes a side project for one eager person while leadership has no visibility into the risks underneath.

    At TU Eindhoven, the Supercomputing Center grew its AI team from one engineer to five in eighteen months. Demand keeps rising. Researchers, educators, and now industry partners all want access to compute, but raw compute power is only half the story.

    Every platform is a race track: you need the right car for it, and someone who knows how to drive. When specialists work alongside researchers, sixfold efficiency gains are common. Without that support, teams burn time learning what others already know.

    The question for any organization is not whether to build AI capability, but how. Centralized through a Center of Excellence? Distributed through a hub and spoke model? The answer depends on risk appetite, maturity, and speed.

    In this 39-minute discussion recorded at the Cisco Studio in Amsterdam, Nick Brummans (TU Eindhoven) and Vera Schut (NXT Minds) share what they have learned about building AI competencies that actually stick.

    Key topics include:

    • Why giving employees AI tools without structure leads to invisible risk and wasted effort.
    • The difference between a Center of Excellence, a hub and spoke model, and letting the business figure it out.
    • How TU Eindhoven onboards researchers onto advanced AI platforms, and what trips them up.
    • Why knowledge is a muscle that requires consistent training, not a one-time workshop.
    • What smaller companies can do faster than enterprises stuck on legacy systems.
    40 mins