Anthropic Interpretability, GPT-4 Image Gen, Latent Reasoning, Synthetic Data & more | EP.39


In this episode of Hidden Layers, Ron Green talks with Dr. ZZ Si, Michael Wharton, and Reed Coke about recent AI developments. They cover Anthropic’s work on Claude 3.5 and model interpretability, OpenAI’s GPT-4 image generation and its underlying architecture, and a new approach to latent reasoning from the Max Planck Institute. They also discuss synthetic data in light of NVIDIA’s acquisition of Gretel AI and reflect on the delayed rollout of Apple Intelligence. The conversation explores what these advances reveal about how AI models reason, behave, and can (or can’t) be controlled.
