Ep67: Why RAG Fails LLMs – And How to Finally Fix It

About this listen

AI is lying to you—here’s why. Retrieval-Augmented Generation (RAG) was supposed to fix AI hallucinations, but it’s failing. In this episode, we break down the limitations of naïve RAG, the rise of dense retrieval, and how new approaches like Agentic RAG, RePlug, and RAG Fusion are revolutionizing AI search accuracy.
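
To make the failure mode concrete, here is a minimal sketch (not from the episode) of the naïve retrieve-then-stuff loop being critiqued: embed the query, take the top-k chunks by cosine similarity, and paste them into the prompt. The `embed` callable is an assumed stand-in for whatever dense retriever you use; whatever it ranks highest is all the LLM ever sees.

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    """Indices of the k document vectors most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

def naive_rag_prompt(query, chunks, embed, k=3):
    """Build the stuffed prompt that naïve RAG would send to an LLM.

    `embed` maps text -> 1-D vector (assumed interface, e.g. a dense
    retriever such as Contriever). Naïve RAG trusts its top-k blindly,
    so bad retrieval flows straight into the generation step.
    """
    doc_vecs = np.stack([embed(c) for c in chunks])
    top = cosine_top_k(embed(query), doc_vecs, k)
    context = "\n\n".join(chunks[i] for i in top)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```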

🔍 Key Insights:

  • Why naïve RAG fails: bad retrieval leads to bad answers
  • How Contriever & dense retrieval improve accuracy
  • RePlug’s approach to augmenting black-box LLMs with retrieval
  • Why RAG Fusion is a game-changer for AI search (see the sketch after this list)
  • The future of AI retrieval beyond vector databases
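
As a rough illustration of the RAG Fusion idea (a sketch under assumptions, not the paper's reference code): several reformulations of the user query are each run through the retriever, and the ranked lists are merged with reciprocal rank fusion so documents that rank well across many variants float to the top. `generate_queries` and `retrieve` are assumed callables, not a specific library API.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse ranked lists of doc ids: score(d) = sum of 1 / (k + rank) over lists."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def rag_fusion_retrieve(query, generate_queries, retrieve, n_variants=4, top_k=5):
    """Retrieve with several query variants and fuse the results.

    generate_queries(query, n) -> list of reworded queries (assumed signature)
    retrieve(q, top_k)         -> list of doc ids, best first (assumed signature)
    """
    variants = [query] + generate_queries(query, n_variants)
    ranked_lists = [retrieve(q, top_k) for q in variants]
    return reciprocal_rank_fusion(ranked_lists)[:top_k]
```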

If you’ve ever wondered why LLMs still struggle with real knowledge retrieval, this is the episode you need!

🎧 Listen now and stay ahead in AI!


References:

  1. [2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

  2. [2112.09118] Unsupervised Dense Information Retrieval with Contrastive Learning

  3. [2301.12652] REPLUG: Retrieval-Augmented Black-Box Language Models

  4. [2402.03367] RAG-Fusion: a New Take on Retrieval-Augmented Generation

  5. [2312.10997] Retrieval-Augmented Generation for Large Language Models: A Survey





