Scott & Mark Learn To… Induced Hallucinations

About this listen

In this episode of Scott & Mark Learn To…, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the differences between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.

Takeaways:

  • AI is getting better, but we still need to be careful and double-check our work
  • AI sometimes gives wrong answers confidently
  • Jailbreaks break the rules on purpose, while hallucinations are just AI making stuff up

Who are they?

View Scott Hanselman on LinkedIn

View Mark Russinovich on LinkedIn

Watch Scott and Mark Learn on YouTube

Listen to other episodes at scottandmarklearn.to

Discover and follow other Microsoft podcasts at microsoft.com/podcasts

Hosted on Acast. See acast.com/privacy for more information.
