Hallucinations in LLMs: When AI Makes Things Up & How to Stop It

In this episode, we explore why large language models hallucinate, and why those hallucinations might actually be a feature rather than a bug. Drawing on new research from OpenAI, we break down the science, explain the key concepts, and discuss what this means for the future of AI and discovery.

Sources:

  • "Why Language Models Hallucinate" (OpenAI)