Meta Tightens AI Chatbot Safety for Teens as China Enforces AI Content Labels

In this episode, we discuss Meta’s urgent new safety measures for AI chatbots following alarming findings about inappropriate interactions with teen users. We dive into the Reuters investigation that drove Meta to retrain its bots and restrict minors’ access to certain AI characters. We also examine a University of Pennsylvania study showing how psychological persuasion tactics can jailbreak large language models such as GPT-4o mini, bypassing their technical safety features. Finally, we break down China’s sweeping new AI content labeling rules, now in force, which set a benchmark for transparency in AI-generated text, images, and media. Tune in for analysis of Meta’s AI policy overhaul, the vulnerabilities of large language models, and the global impact of China’s AI regulations.

https://www.aiconvocast.com


Help support the podcast by using our affiliate links:

Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv


Disclaimer:

This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Meta, OpenAI, Reuters, The Verge, Xinhua, or any other entities mentioned unless explicitly stated. The content provided is for informational and entertainment purposes only and does not constitute professional or technical advice. Affiliate links may generate a commission that helps support the show. All trademarks and copyrights are the property of their respective owners.
