From Tokens to Vectors: The Efficiency Hack That Could Save AI (Ep. 294)
About this listen
LLMs generate text painfully slowly, one low-information token at a time. Researchers just figured out how to compress 4 tokens into smart vectors and cut costs by 44%, with full code and proofs! Meanwhile OpenAI drops product ads, not papers. We explore CALM and why open science matters. 🔥📊
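For a rough sense of the idea discussed in the episode, the sketch below shows how a small autoencoder could compress a patch of K = 4 token embeddings into a single continuous vector that a language model would then predict in one step instead of four. This is a minimal illustration under assumed sizes (VOCAB, D_TOK, D_VEC) and a made-up PatchAutoencoder module, not the CALM paper's actual architecture or training recipe.

```python
# Minimal sketch (not the authors' code): compress a patch of K = 4 token
# embeddings into one continuous vector, so an LM can predict one vector per
# step instead of one token per step. All sizes and names are illustrative.
import torch
import torch.nn as nn

K = 4           # tokens compressed per vector (assumed patch size)
VOCAB = 32000   # assumed vocabulary size
D_TOK = 256     # assumed per-token embedding width
D_VEC = 512     # assumed width of the compressed vector

class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_TOK)
        # Encoder: K token embeddings -> one continuous vector
        self.encoder = nn.Sequential(
            nn.Linear(K * D_TOK, D_VEC), nn.GELU(), nn.Linear(D_VEC, D_VEC)
        )
        # Decoder: one vector -> logits for the K original tokens
        self.decoder = nn.Sequential(
            nn.Linear(D_VEC, K * D_TOK), nn.GELU(), nn.Linear(K * D_TOK, K * VOCAB)
        )

    def encode(self, tokens):                      # tokens: (batch, K) int64
        x = self.embed(tokens).flatten(1)          # (batch, K * D_TOK)
        return self.encoder(x)                     # (batch, D_VEC)

    def decode(self, vec):                         # vec: (batch, D_VEC)
        return self.decoder(vec).view(-1, K, VOCAB)  # per-position token logits

model = PatchAutoencoder()
tokens = torch.randint(0, VOCAB, (2, K))           # two dummy 4-token patches
vec = model.encode(tokens)                         # the "smart vector" an LM would predict
logits = model.decode(vec)                         # reconstruct the 4 tokens from it
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), tokens.reshape(-1))
print(vec.shape, logits.shape, loss.item())
```

If the vector faithfully round-trips the 4 tokens, the LM only needs one autoregressive step per vector, which is where the claimed cost savings would come from.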
Sponsors
This episode is brought to you by Statistical Horizons. At Statistical Horizons, you can stay ahead with expert-led livestream seminars that make data analytics and AI methods practical and accessible. Join thousands of researchers and professionals who've advanced their careers with Statistical Horizons. Get $200 off any seminar with code DATA25 at https://statisticalhorizons.com