
When AI Gets It Wrong: Claude’s Legal Hallucination and What It Means for Law
About this listen
In this episode, Jaeden and Conor dive into a recent incident where Anthropic's AI, Claude, generated a fabricated legal citation, prompting an apology from the company’s legal team. They examine the broader implications of AI hallucinations within the legal field, the critical importance of verifying sources, and how AI—when used responsibly—can significantly boost legal productivity. The conversation also explores how legal and business professionals can adapt their mindset to integrate AI tools effectively into their workflows.
Chapters
00:00 The Hilarious AI Hallucination Incident
02:53 The Impact of AI on the Legal Industry
05:42 Navigating AI Hallucinations in Professional Settings
08:37 The Future of AI in Law and Beyond
AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://AIBox.ai/
Conor’s AI Course: https://www.ai-mindset.ai/courses
Conor’s AI Newsletter: https://www.ai-mindset.ai/
Jaeden’s AI Hustle Community: https://www.skool.com/aihustle/about