Grok, deepfakes and who should police AI
About this listen
What happens when AI gets it wrong? After a backlash over the misuse of Elon Musk’s AI tool Grok, new restrictions have been imposed on editing images of real people. Is this a sign that AI regulation is lagging, and who should be in charge – governments or Silicon Valley? This week, Danny and Katie are joined by AI computer scientist Kate Devlin from King’s College London to discuss why this moment could be a turning point for global AI rules.
Image: Getty
Hosted on Acast. See acast.com/privacy for more information.