End-to-End Security
Failed to add items
Add to basket failed.
Add to Wish List failed.
Remove from Wish List failed.
Follow podcast failed
Unfollow podcast failed
-
Narrated by:
-
By:
About this listen
You cannot secure what you cannot see.
When cloud adoption started, employees picked their own tools without IT knowing. They called it Shadow IT. The same pattern is now repeating with AI.
Developers pull models from Hugging Face because it is convenient. Over one hundred thousand models live there. Most have never been security checked.
A well-known vendor recently published a PyTorch container with nearly one hundred documented vulnerabilities. Some can be patched. Some cannot.
For AI, there is no Patch Tuesday yet.
The risk goes beyond infrastructure. A model that answers questions can also leak data if you phrase the prompt differently. Securing containers is one discipline. Understanding what a model actually does is another.
In this 35-minute discussion recorded at the Cisco Studio in Amsterdam, Michel Cosman (MDCS.AI) and Jan Heijdra (Cisco) examine what end-to-end security means when AI workloads enter production.
Key topics include:
- Why "Shadow AI" is becoming the new Shadow IT, and how organizations regain visibility.
- The difference between securing infrastructure and securing model behavior.
- How attackers fire 50,000 prompts at a model to find vulnerabilities, and how defenders can do the same.
- What the EU AI Act demands in terms of auditability, and why it is no longer optional.
- Why AI security needs to be a boardroom conversation, not an IT project.