Episodes

  • Lorentz
    Feb 19 2026

    No power, no compute. That is the reality in the Netherlands today.

    When Nvidia assessed Europe, the message was clear: there is no sovereign AI infrastructure here. Scandinavian countries have electricity. The Netherlands does not. If nothing changes, the next generation of AI talent will have to leave the country to do serious work.

    Lorentz is the response. A regional AI initiative built by entrepreneurs, for entrepreneurs, without government funding or European program delays.

    The model exists already. In Sweden, the Wallenberg family funded Berzelius, and within years an ecosystem of talent, startups, and commercial success emerged around it. Lorentz applies the same concept to the Netherlands, starting with a single cluster focused on Digital Health.

    The goal is not just compute power. It is bringing together investors, universities, consultancies, and startups around shared infrastructure. A place where AI use cases move from pilot to revenue.

    In this 45-minute discussion recorded at the Cisco Studio in Amsterdam, Viktor Mirovic (Lorentz) and Ken van Ierlant (Mr Data / AI Leadership program) explain why the Dutch need to stop waiting and start building.

    Key topics include:

    • Why a year in AI time equals a century, and why large national programs will arrive too late.
    • How 80 to 90 percent of enterprise IT budgets disappear into legacy systems, leaving no room for innovation.
    • The difference between AI as a "shiny object" and AI as a transformation of operating models.
    • Why sovereignty matters when your strategic advantage depends on proprietary data and models.
    • How Lorentz plans to replicate its first cluster across multiple regions and domains.
    45 mins
  • AI Infrastructure Costs
    Feb 19 2026

    Most AI pilots never reach production.

    The technology works. The use case makes sense. Then the cloud bill arrives. Costs spiral before anyone sees it coming.

    It starts small. A few GPUs in the cloud. Reasonable invoices. Then the project scales. Storage costs appear. Data transfer fees stack up. That monthly cloud bill? It can multiply by thirty before finance even flags it.

    Meanwhile, GPUs sit idle. Storage and network cannot keep up with compute. Organizations invest in processing power, then watch it wait for data that arrives too slowly. Utilization rates below thirty percent are common.

    Pilots get cancelled, budgets freeze, and AI ambitions stall across the organization.

    In this 34-minute discussion recorded at the Cisco Studio in Amsterdam, Guy D'Hauwer (Automation Group) and Sander ten Hoedt (Cisco) break down what actually drives AI infrastructure costs and when it makes sense to move from cloud to owned infrastructure.

    Key topics include:

    • Why "cost per token" should be the metric every AI team tracks, and why most do not.
    • How cloud flexibility turns into cloud lock-in through services that stack fees on fees.
    • The break-even point where owned infrastructure starts delivering more capacity for the same budget.
    • Why GPU underutilization is rarely a GPU problem, and what bottlenecks actually cause it.
    • How prefab modular datacenters cut deployment time from months to days.
    35 mins
  • AI Center of Excellence
    Feb 19 2026

    Giving people AI tools is not the same as AI adoption.

    Most employees are driven by their inbox. Add a strategic AI project on top, and enthusiasm alone will not create capacity. Without structure, AI becomes a side project for one eager person while leadership has no visibility into the risks underneath.

    At TU Eindhoven, the Supercomputing Center grew its AI team from one engineer to five in eighteen months. Demand keeps rising. Researchers, educators, and now industry partners all want access to compute, but raw compute power is only half the story.

    Every platform is a race track: you need the right car for it, and someone who knows how to drive. When specialists work alongside researchers, efficiency gains of six times are common. Without that support, teams burn time learning what others already know.

    The question for any organization is not whether to build AI capability, but how. Centralized through a Center of Excellence? Distributed through a hub and spoke model? The answer depends on risk appetite, maturity, and speed.

    In this 39-minute discussion recorded at the Cisco Studio in Amsterdam, Nick Brummans (TU Eindhoven) and Vera Schut (NXT Minds) share what they have learned about building AI competencies that actually stick.

    Key topics include:

    • Why giving employees AI tools without structure leads to invisible risk and wasted effort.
    • The difference between a Center of Excellence, a hub and spoke model, and letting the business figure it out.
    • How TU Eindhoven onboards researchers onto advanced AI platforms, and what trips them up.
    • Why knowledge is a muscle that requires consistent training, not a one-time workshop.
    • What smaller companies can do faster than enterprises stuck on legacy systems.
    40 mins
  • End-to-End Security
    Feb 19 2026

    You cannot secure what you cannot see.

    When cloud adoption started, employees picked their own tools without IT knowing. They called it Shadow IT. The same pattern is now repeating with AI.

    Developers pull models from Hugging Face because it is convenient. Over one hundred thousand models live there. Most have never been security-checked.

    A well-known vendor recently published a PyTorch container with nearly one hundred documented vulnerabilities. Some can be patched. Some cannot.

    For AI, there is no Patch Tuesday yet.

    The risk goes beyond infrastructure. A model that answers questions can also leak data if you phrase the prompt differently. Securing containers is one discipline. Understanding what a model actually does is another.

    In this 35-minute discussion recorded at the Cisco Studio in Amsterdam, Michel Cosman (MDCS.AI) and Jan Heijdra (Cisco) examine what end-to-end security means when AI workloads enter production.

    Key topics include:

    • Why "Shadow AI" is becoming the new Shadow IT, and how organizations regain visibility.
    • The difference between securing infrastructure and securing model behavior.
    • How attackers fire 50,000 prompts at a model to find vulnerabilities, and how defenders can do the same.
    • What the EU AI Act demands in terms of auditability, and why it is no longer optional.
    • Why AI security needs to be a boardroom conversation, not an IT project.
    36 mins
  • Episode 2: AI Infrastructure - Owned vs Cloud - The AI Journey of Trivium
    Feb 5 2026

    Many organizations wait for the right hardware, the right budget, or the right moment to begin investing in artificial intelligence.

    Trivium Packaging did not.

    They launched their first AI chatbot on a single server, without a GPU. Each response took approximately ten minutes. The system was slow, inelegant, and limited—but it functioned.

    Ten months later, Trivium operates a full Kubernetes cluster and has established an AI Center of Excellence. The procurement team now uses the system daily to translate contracts, while other departments are actively requesting access. The organization is already reaching the limits of its current infrastructure—a challenge that reflects success rather than failure.

    Key insight from Sebastiaan van Duijn: budgets are not approved by presentations. They are approved by working demonstrations, even imperfect ones.

    In this 32-minute discussion recorded at Cisco Studio Amsterdam, Sebastiaan van Duijn (Trivium Packaging) and Boris Vermaas (Cisco) explain how Trivium built its AI capabilities from the ground up.

    Topics discussed include:

    • The rationale for choosing an on-premises approach over cloud from the outset, and how this simplified security discussions.
    • The development of an internal “AI App Store” that ensures users only access applications appropriate to their roles.
    • The procurement team’s first reaction to the chatbot (“Is it a he or a she?”).
    • Why achieving 60–70% accuracy quickly is often more valuable than waiting indefinitely for a perfect solution.
    32 mins
  • Episode 1: Simplicity
    Feb 4 2026

    Initiating AI workloads in the cloud is straightforward. GPUs can be provisioned quickly, experiments launched immediately, and early results demonstrated to leadership—without capital expenditure or procurement delays.

    The challenge emerges at scale.

    As systems move into production, costs escalate. Finance questions why cloud spend doubled last quarter. Security teams seek clarity on where sensitive training data resides. Machine learning engineers face compute bottlenecks despite significant allocated capacity.

    When failures occur, accountability becomes fragmented. With multiple vendors involved, resolution is slow and responsibility diffuse.

    What once took hours to deploy can take weeks to stabilize.

    In this 37-minute discussion recorded at Cisco Studio Amsterdam, Raymond Drielinger (MDCS.AI) and Jara Osterfeld (Cisco) examine what happens when AI workloads outgrow the cloud sandbox and enter enterprise reality.

    Key topics include:

    • Why GPUs remain underutilized in shared cloud environments while costs continue to accrue.
    • How “noisy neighbor” effects degrade model performance—and why identical workloads often run faster on-premises.
    • The difference between assembling hundreds of disconnected components and deploying an integrated, high-performance system engineered for immediate results.
    • How a single point of accountability replaces multi-vendor finger-pointing.

    A practical perspective on what it truly takes to scale AI beyond experimentation.

    38 mins