
Episode 019: LLM Evaluation Frameworks
About this listen
Lots of people like to talk about the importance of prompts, context, and what is sent to an LLM. Few discuss an even more important aspect of an LLM-driven system: evaluating its output.
In this episode, we discuss traditional and modern metrics used to evaluate LLM outputs, and we review common frameworks for obtaining that feedback.
Though evals are a lot of work (and easy to do poorly), those building (or buying) LLM-driven systems should be transparent about their process and the current state of their eval framework.
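To make "traditional metrics" concrete, here is a minimal, illustrative sketch (not taken from the episode) of two classic reference-based checks, exact match and token-level F1, in the style popularized by QA benchmarks such as SQuAD. The function names are hypothetical.

# Illustrative sketch only: two "traditional" reference-based metrics
# for scoring an LLM output against a known-good reference answer.
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings match exactly, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1: harmonic mean of precision and recall."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    pred = "The capital of France is Paris"
    ref = "Paris is the capital of France"
    print(exact_match(pred, ref))           # 0.0 -- brittle to word order
    print(round(token_f1(pred, ref), 2))    # 1.0 -- order-insensitive overlap

Brittle results like the exact-match score above are part of why more modern approaches, such as LLM-as-judge rubrics, come up; the episode covers those trade-offs.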