“6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes
About this listen
AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school-of-thought argues that we should expect future powerful AIs to be power-seeking ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things. “Alas, the power-seeking ruthless consequentialist AIs are still coming,” sigh the former. “Just you wait.”
As it happens, I’m basically in that “alas, just you wait” camp, expecting ruthless future AIs. But my camp faces a real question: what exactly is it about human brains[1] that allows them to not always act like power-seeking ruthless consequentialists? I find existing explanations in the discourse—e.g. “ah but humans just aren’t smart and reflective enough”, or evolved modularity, or shard theory, etc.—to be wrong, handwavy, or otherwise unsatisfying.
So in this post, I offer my own explanation of why “agent foundations” toy models fail to describe humans, centering around a particular non-“behaviorist” [...]
---
Outline:
(00:13) Tl;dr
(03:35) 0. Background
(03:39) 0.1. Human social instincts and Approval Reward
(07:23) 0.2. Hang on, will future powerful AGI / ASI by default lack Approval Reward altogether?
(10:29) 0.3. Where do self-reflective (meta)preferences come from?
(12:38) 1. The human intuition that it's normal and good for one's goals & values to change over the years
(14:51) 2. The human intuition that ego-syntonic desires come from a fundamentally different place than urges
(17:53) 3. The human intuition that helpfulness, deference, and corrigibility are natural
(19:03) 4. The human intuition that unorthodox consequentialist planning is rare and sus
(23:53) 5. The human intuition that societal norms and institutions are mostly stably self-enforcing
(24:01) 5.1. Detour into Security-Mindset Institution Design
(26:22) 5.2. The load-bearing ingredient in human society is not Security-Mindset Institution Design, but rather good-enough institutions plus almost-universal human innate Approval Reward
(29:26) 5.3. Upshot
(30:49) 6. The human intuition that treating other humans as a resource to be callously manipulated and exploited, just like a car engine or any other complex mechanism in their environment, is a weird anomaly rather than the obvious default
(31:13) 7. Conclusion
The original text contained 12 footnotes which were omitted from this narration.
---
First published:
December 3rd, 2025
Source:
https://www.lesswrong.com/posts/d4HNRdw6z7Xqbnu5E/6-reasons-why-alignment-is-hard-discourse-seems-alien-to
---
Narrated by TYPE III AUDIO.