Controlling AI Models from the Inside



As generative AI moves into production, traditional guardrails and input/output filters can prove too slow, too expensive, or too limited. In this episode, Alizishaan Khatri of Wrynx joins Daniel and Chris to explore a fundamentally different approach to AI safety and interpretability. They unpack the limits of today's black-box defenses, the role of interpretability, and how model-native, runtime signals can enable safer AI systems.

Featuring:

  • Alizishaan Khatri – LinkedIn
  • Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
  • Daniel Whitenack – Website, GitHub, X

Upcoming Events:

  • Register for upcoming webinars here!