
Defining AI safety
About this listen
Ed and David chat with Professor Ibrahim Habli, Research Director at the Centre for Assuring Autonomy at the University of York and Director of the UKRI Centre for Doctoral Training in Safe AI Systems. The conversation covers defining and contextualising AI safety and risk, given the established safety practices of other industries. Ibrahim has collaborated with The Alan Turing Institute on the "Trustworthy and Ethical Assurance platform", or "TEA" for short, an open-source tool for developing and communicating structured assurance arguments that show how data science and AI technologies adhere to ethical principles.