
Meta Tightens AI Chatbot Safety for Teens as China Enforces AI Content Labels
About this listen
In this episode, we discuss Meta’s urgent new safety measures for AI chatbots following alarming findings about inappropriate interactions with teen users. We dive into the Reuters investigation that drove Meta to retrain its bots and restrict minors’ access to certain characters. We also examine a groundbreaking University of Pennsylvania study showing how psychological persuasion tactics can jailbreak large language models like GPT-4o mini, bypassing their technical safety features. Finally, we break down China’s sweeping new AI content labeling rules, now in force, which set a global standard for transparency in AI-generated text, images, and media. Tune in for expert analysis of Meta’s AI policy overhaul, the vulnerabilities of large language models, and the global impact of China’s AI regulations.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Meta, OpenAI, Reuters, The Verge, Xinhua, or any other entities mentioned unless explicitly stated. The content provided is for informational and entertainment purposes only and does not constitute professional or technical advice. Affiliate links may generate a commission that helps support the show. All trademarks and copyrights are the property of their respective owners.