#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk

About this listen

Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI.

We discussed why AI poses an existential risk to humanity, what makes this problem so hard to solve, why Gabe believes we need to prevent the development of superintelligence for at least the next two decades, and more.

Follow Gabe on Twitter

Read The Compendium and A Narrow Path