Deterministic by Design: Why "Temp=0" Still Drifts and How to Fix It
About this listen
Why do LLMs still give different answers even with the temperature set to zero? In this episode of The Second Brain AI Podcast, we unpack new research from Thinking Machines Lab on defeating nondeterminism in LLM inference. We cover the surprising role of floating-point math, the real system-level culprit (a lack of batch invariance), and how redesigned, batch-invariant kernels can finally deliver bit-identical outputs. We also explore the trade-offs, the real-world implications for testing and reliability, and how this breakthrough enables reproducible research and true on-policy reinforcement learning.
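The floating-point point is easy to see outside of any LLM stack. Here is a minimal Python sketch (a toy illustration, not the paper's kernels) showing that addition is not associative, which is why a reduction that regroups its terms differently depending on batch size can change the low-order bits of a "deterministic" result:

```python
from functools import reduce

# Floating-point addition is not associative: the same three terms,
# grouped differently, round to different doubles.
xs = [0.1, 0.2, 0.3]

left_to_right = reduce(lambda acc, x: acc + x, xs)            # (0.1 + 0.2) + 0.3
right_to_left = reduce(lambda acc, x: x + acc, reversed(xs))  # 0.1 + (0.2 + 0.3)

print(left_to_right)                    # 0.6000000000000001
print(right_to_left)                    # 0.6
print(left_to_right == right_to_left)   # False

# In LLM serving, the batch size decides how reduction kernels split and
# regroup their sums, so the effective "parenthesization" can change from
# request to request even at temperature 0. Batch-invariant kernels pin
# the grouping down so the same prompt always yields the same bits.
```

The fix discussed in the episode is not to make floating-point math associative (it can't be) but to make the summation order independent of batching, so every request sees the same grouping.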
Sources:
- Defeating Nondeterminism in LLM Inference
- Non-Determinism of “Deterministic” LLM Settings