• “Underdog bias rules everything around me” by Richard_Ngo
    Aug 23 2025
    People very often underrate how much power they (and their allies) have, and overrate how much power their enemies have. I call this “underdog bias”, and I think it's the most important cognitive bias for understanding modern society.

    I’ll start by describing a closely-related phenomenon. The hostile media effect is a well-known bias whereby people tend to perceive news they read or watch as skewed against their side. For example, pro-Palestinian students shown a video clip tended to judge that the clip would make viewers more pro-Israel, while pro-Israel students shown the same clip thought it’d make viewers more pro-Palestine. Similarly, sports fans often see referees as being biased against their own team.

    The hostile media effect is particularly striking because it arises in settings where there's relatively little scope for bias. People watching media clips and sports are all seeing exactly the same videos. And sports in particular [...]

    ---

    Outline:

    (03:31) Underdog bias in practice

    (09:07) Why underdog bias?

    ---

    First published:
    August 17th, 2025

    Source:
    https://www.lesswrong.com/posts/f3zeukxj3Kf5byzHi/underdog-bias-rules-everything-around-me

    ---



    Narrated by TYPE III AUDIO.

    13 mins
  • “Epistemic advantages of working as a moderate” by Buck
    Aug 22 2025
    Many people who are concerned about existential risk from AI spend their time advocating for radical changes to how AI is handled. Most notably, they advocate for costly restrictions on how AI is developed now and in the future, e.g. the Pause AI people or the MIRI people. In contrast, I spend most of my time thinking about relatively cheap interventions that AI companies could implement to reduce risk assuming a low budget, and about how to cause AI companies to marginally increase that budget. I'll use the words "radicals" and "moderates" to refer to these two clusters of people/strategies. In this post, I’ll discuss the effect of being a radical or a moderate on your epistemics.

    I don’t necessarily disagree with radicals, and most of my disagreement with them is unrelated to the topic of this post; see the footnote for more on this.[1]

    I often hear people claim that being [...]

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    August 20th, 2025

    Source:
    https://www.lesswrong.com/posts/9MaTnw5sWeQrggYBG/epistemic-advantages-of-working-as-a-moderate

    ---



    Narrated by TYPE III AUDIO.

    6 mins
  • “Four ways Econ makes people dumber re: future AI” by Steven Byrnes
    Aug 21 2025
    (Cross-posted from X, intended for a general audience.)

    There's a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples. Longpost:

    THE FIRST PIECE of Econ anti-pedagogy is hiding in the words “labor” & “capital”. These words conflate a superficial difference (flesh-and-blood human vs not) with a bundle of unspoken assumptions and intuitions, which will all get broken by Artificial General Intelligence (AGI).

    By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”

    Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking [...]

    ---

    Outline:

    (08:50) Tweet 2

    (09:19) Tweet 3

    (10:16) Tweet 4

    (11:15) Tweet 5

    (11:31) 1.3.2 Three increasingly-radical perspectives on what AI capability acquisition will look like

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    August 21st, 2025

    Source:
    https://www.lesswrong.com/posts/xJWBofhLQjf3KmRgg/four-ways-econ-makes-people-dumber-re-future-ai

    ---



    Narrated by TYPE III AUDIO.

    14 mins
  • “Should you make stone tools?” by Alex_Altair
    Aug 21 2025
    Knowing how evolution works gives you an enormously powerful tool to understand the living world around you and how it came to be that way. (Though it's notoriously hard to use this tool correctly, to the point that I think people mostly shouldn't try to use it when making substantial decisions.) The simple heuristic is "other people died because they didn't have this feature". A slightly less simple heuristic is "other people didn't have as many offspring because they didn't have this feature".

    So sometimes I wonder about whether this thing or that is due to evolution. When I walk into a low-hanging branch, I'll flinch away before even consciously registering it, and afterwards feel some gratefulness that my body contains such high-performing reflexes. Eyes, it turns out, are extremely important; the inset socket, lids, lashes, brows, and blink reflexes are all hard-earned hard-coded features. On the other side [...]

    ---

    First published:
    August 14th, 2025

    Source:
    https://www.lesswrong.com/posts/bkjqfhKd8ZWHK9XqF/should-you-make-stone-tools

    ---



    Narrated by TYPE III AUDIO.

    6 mins
  • “My AGI timeline updates from GPT-5 (and 2025 so far)” by ryan_greenblatt
    Aug 21 2025
    As I discussed in a prior post, I felt like there were some reasonably compelling arguments for expecting very fast AI progress in 2025 (especially on easily verified programming tasks). Concretely, this might have looked like reaching 8 hour 50% reliability horizon lengths on METR's task suite[1] by now due to greatly scaling up RL and getting large training runs to work well. In practice, I think we've seen AI progress in 2025 which is probably somewhat faster than the historical rate (at least in terms of progress on agentic software engineering tasks), but not much faster. And, despite large scale-ups in RL and now seeing multiple serious training runs much bigger than GPT-4 (including GPT-5), this progress didn't involve any very large jumps.

    The doubling time for horizon length on METR's task suite has been around 135 days this year (2025) while it was more like 185 [...]
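
    For intuition about what those doubling times imply, here is a back-of-the-envelope calculation (not from the post; just arithmetic on the two quoted figures, assuming clean exponential growth at a fixed doubling time):

        # Factor by which horizon length multiplies over 365 days,
        # given a fixed doubling time in days.
        def growth_factor_per_year(doubling_time_days: float) -> float:
            return 2 ** (365 / doubling_time_days)

        for d in (135, 185):
            print(f"{d}-day doubling -> ~{growth_factor_per_year(d):.1f}x per year")
        # 135-day doubling -> ~6.5x per year
        # 185-day doubling -> ~3.9x per year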

    The original text contained 5 footnotes which were omitted from this narration.

    ---

    First published:
    August 20th, 2025

    Source:
    https://www.lesswrong.com/posts/2ssPfDpdrjaM2rMbn/my-agi-timeline-updates-from-gpt-5-and-2025-so-far-1

    ---



    Narrated by TYPE III AUDIO.

    7 mins
  • “Hyperbolic model fits METR capabilities estimate worse than exponential model” by gjm
    Aug 20 2025
    This is a response to https://www.lesswrong.com/posts/mXa66dPR8hmHgndP5/hyperbolic-trend-with-upcoming-singularity-fits-metr which claims that a hyperbolic model, complete with an actual singularity in the near future, is a better fit for the METR time-horizon data than a simple exponential model.

    I think that post has a serious error in it and its conclusions are the reverse of correct. Hence this one.

    (An important remark: although I think Valentin2026 made an important mistake that invalidates his conclusions, I think he did an excellent thing in (1) considering an alternative model, (2) testing it, (3) showing all his working, and (4) writing it up clearly enough that others could check his work. Please do not take any part of this post as saying that Valentin2026 is bad or stupid or any nonsense like that. Anyone can make a mistake; I have made plenty of equally bad ones myself.)

    The models

    Valentin2026's post compares the results of [...]
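
    The excerpt cuts off before the models are defined, so here is a rough sketch of the kind of comparison at issue, assuming a simple exponential form, a simple hyperbolic form with a singularity at t_s, and made-up stand-in data (the actual models and METR data are in the linked posts):

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical stand-in data: time in years, horizon length in hours.
        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        T = np.array([0.1, 0.25, 0.6, 1.5, 3.8])

        def exponential(t, A, d):
            return A * 2.0 ** (t / d)   # doubling time d, in years

        def hyperbolic(t, A, t_s):
            return A / (t_s - t)        # blows up (a "singularity") at t = t_s

        for name, f, p0 in [("exponential", exponential, (0.1, 0.4)),
                            ("hyperbolic", hyperbolic, (0.2, 3.0))]:
            popt, _ = curve_fit(f, t, T, p0=p0)
            rss = float(np.sum((T - f(t, *popt)) ** 2))
            print(f"{name}: params={popt}, RSS={rss:.4f}")

    A real comparison needs more care than this sketch: horizon lengths span orders of magnitude, so exactly how the residuals are measured matters a great deal, and the post pins down where the original comparison went wrong.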

    ---

    Outline:

    (01:02) The models

    (02:32) Valentin2026's fits

    (03:29) The problem

    (05:11) Fixing the problem

    (06:15) Conclusion

    ---

    First published:
    August 19th, 2025

    Source:
    https://www.lesswrong.com/posts/ZEuDH2W3XdRaTwpjD/hyperbolic-model-fits-metr-capabilities-estimate-worse-than

    ---



    Narrated by TYPE III AUDIO.

    8 mins
  • “My Interview With Cade Metz on His Reporting About Lighthaven” by Zack_M_Davis
    Aug 18 2025
    On 12 August 2025, I sat down with New York Times reporter Cade Metz to discuss some criticisms of his 4 August 2025 article, "The Rise of Silicon Valley's Techno-Religion". The transcript below has been edited for clarity.

    ZMD: In accordance with our meetings being on the record in both directions, I have some more questions for you.

    I did not really have high expectations about the August 4th article on Lighthaven and the Secular Solstice. The article is actually a little bit worse than I expected, in that you seem to be pushing a "rationalism as religion" angle really hard in a way that seems inappropriately editorializing for a news article.

    For example, you write, quote,

    Whether they are right or wrong in their near-religious concerns about A.I., the tech industry is reckoning with their beliefs.

    End quote. What is the word "near-religious" [...]

    ---

    First published:
    August 17th, 2025

    Source:
    https://www.lesswrong.com/posts/JkrkzXQiPwFNYXqZr/my-interview-with-cade-metz-on-his-reporting-about

    ---



    Narrated by TYPE III AUDIO.

    10 mins
  • “Church Planting: When Venture Capital Finds Jesus” by Elizabeth
    Aug 18 2025
    I’m going to describe a Type Of Guy starting a business, and you’re going to guess the business:

    1. The founder is very young, often under 25.
    2. He might work alone or with a founding team, but when he tells the story of the founding it will always have him at the center.
    3. He has no credentials for this business.
    4. This business has a grand vision, which he thinks is the most important thing in the world.
    5. This business lives and dies by its growth metrics.
    6. 90% of attempts in this business fail, but he would never consider that those odds apply to him.
    7. He funds this business via a mix of small contributors, large networks pooling their funds, and major investors.
    8. Disagreements between founders are one of the largest contributors to failure.
    9. Funders invest for a mix of truly [...]

    ---

    Outline:

    (03:15) What is Church Planting?

    (04:06) The Planters

    (07:45) The Goals

    (09:54) The Funders

    (12:45) The Human Cost

    (14:03) The Life Cycle

    (17:41) The Theology

    (18:37) The Failures

    (21:10) The Alternatives

    (22:25) The Attendees

    (25:40) The Supporters

    (25:43) Wives

    (26:41) Support Teams

    (27:32) Mission Teams

    (28:06) Conclusion

    (29:12) Sources

    (29:15) Podcasts

    (30:19) Articles

    (30:37) Books

    (30:44) Thanks

    ---

    First published:
    August 16th, 2025

    Source:
    https://www.lesswrong.com/posts/NMoNLfX3ihXSZJwqK/church-planting-when-venture-capital-finds-jesus

    ---



    Narrated by TYPE III AUDIO.

    31 mins