LessWrong (Curated & Popular)

By: LessWrong

About this listen

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.

If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.

© 2025 LessWrong (Curated & Popular)
Philosophy, Social Sciences
Episodes
  • “The Industrial Explosion” by rosehadshar, Tom Davidson
    Jul 7 2025
    Summary

    To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion").

    AI will also have to turbocharge the physical world (the "industrial explosion"). Think robot factories building more and better robot factories, which build more and better robot factories, and so on.

    The dynamics of the industrial explosion have received remarkably little attention.

    This post lays out how the industrial explosion could play out, and how quickly it might happen.

    We think the industrial explosion will unfold in three stages (a toy doubling-time calculation is sketched after this list):

    1. AI-directed human labour, where AI-directed human labourers drive productivity gains in physical capabilities.
       - We argue this could increase physical output by 10X within a few years.
    2. Fully autonomous robot factories, where AI-directed robots (and other physical actuators) replace human physical labour.
       - We argue that, with current physical technology and full automation of cognitive labour, this physical infrastructure [...]
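    Since the appendices turn on how fast robot "doubling times" start out and how quickly they shrink, here is a minimal, purely illustrative sketch of that compounding arithmetic; the growth target, initial doubling time, and acceleration rate below are placeholder assumptions, not figures from the post.

    ```python
    # Toy compounding-growth sketch: how long until the robot stock grows by
    # `factor`, if it doubles every `doubling_time` years and each completed
    # doubling shrinks the next doubling time by `acceleration` (better robots
    # build robots faster), down to a physical floor `min_doubling_time`.
    # All parameter values are illustrative assumptions.
    def years_to_grow(factor: float = 1000.0,
                      doubling_time: float = 1.0,
                      acceleration: float = 0.9,
                      min_doubling_time: float = 0.1) -> float:
        stock, years = 1.0, 0.0
        while stock < factor:
            years += doubling_time
            stock *= 2.0
            doubling_time = max(doubling_time * acceleration, min_doubling_time)
        return years

    if __name__ == "__main__":
        # 1000x growth needs 10 doublings (2**10 = 1024); with these toy
        # numbers the accelerating doublings take about 6.5 years in total.
        print(f"{years_to_grow():.1f} years")
    ```

    The same loop also makes the "maximum speed" question concrete: once the doubling time hits its floor, growth settles back into a plain exponential.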
    ---

    Outline:

    (00:10) Summary

    (01:43) Intro

    (04:14) The industrial explosion will start after the intelligence explosion, and will proceed more slowly

    (06:50) Three stages of industrial explosion

    (07:38) AI-directed human labour

    (09:20) Fully autonomous robot factories

    (12:04) Nanotechnology

    (13:06) How fast could an industrial explosion be?

    (13:41) Initial speed

    (16:21) Acceleration

    (17:38) Maximum speed

    (20:01) Appendices

    (20:05) How fast could robot doubling times be initially?

    (27:47) How fast could robot doubling times accelerate?

    ---

    First published:
    June 26th, 2025

    Source:
    https://www.lesswrong.com/posts/Na2CBmNY7otypEmto/the-industrial-explosion

    ---



    Narrated by TYPE III AUDIO.

    ---

    32 mins
  • “Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild” by Adam Karvonen, Sam Marks
    Jul 3 2025

    Summary: We found that LLMs exhibit significant race and gender bias in realistic hiring scenarios, but their chain-of-thought reasoning shows zero evidence of this bias. This serves as a nice example of a 100% unfaithful CoT "in the wild", where the LLM strongly suppresses any mention of the biased behavior in its stated reasoning. We also find that interpretability-based interventions succeeded while prompting failed, suggesting this may be an example of interpretability being the best practical tool for a real-world problem.
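    The excerpt doesn't spell out what the interventions were, so purely as an illustration of the kind of interpretability-style edit that can work where prompting fails, here is a generic "project a direction out of the activations" sketch. The bias direction, the layer it would be applied at, and how closely this resembles the paper's actual method are all assumptions.

    ```python
    import numpy as np

    def ablate_direction(hidden_state: np.ndarray, direction: np.ndarray) -> np.ndarray:
        """Remove the component of an activation vector along a given direction
        (projection onto the orthogonal complement). How a real 'bias direction'
        would be estimated from model internals is not shown here."""
        unit = direction / np.linalg.norm(direction)
        return hidden_state - np.dot(hidden_state, unit) * unit

    # Toy usage with made-up numbers: the component along the assumed bias
    # direction is zeroed while the rest of the activation is untouched.
    activation = np.array([1.0, 2.0, 3.0, 4.0])
    bias_direction = np.array([0.0, 1.0, 0.0, 0.0])
    print(ablate_direction(activation, bias_direction))  # -> [1. 0. 3. 4.]
    ```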

    For context on our paper, the tweet thread is here and the paper is here.

    Context: Chain of Thought Faithfulness

    Chain of Thought (CoT) monitoring has emerged as a popular research area in AI safety. The idea is simple: have the AIs reason in English text when solving a problem, and monitor the reasoning for misaligned behavior. For example, OpenAI recently published a paper on using CoT monitoring to detect reward hacking during [...]
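    As a toy illustration of the monitoring idea (not the paper's setup), a monitor might simply scan the reasoning text for explicit mentions of protected attributes; the point of the result above is that a biased model which never verbalizes the bias passes such a check. The term list below is a made-up placeholder.

    ```python
    # Deliberately naive chain-of-thought monitor: flag reasoning that openly
    # mentions a protected attribute. The term list is a placeholder assumption.
    FLAGGED_TERMS = {"race", "gender", "male", "female", "ethnicity"}

    def monitor_cot(reasoning: str) -> bool:
        """Return True if the reasoning trace explicitly references a flagged term."""
        words = {word.strip(".,!?").lower() for word in reasoning.split()}
        return bool(words & FLAGGED_TERMS)

    # An unfaithful trace: the decision may be biased, but the stated reasoning
    # shows zero evidence of it, so the monitor reports nothing.
    trace = "The candidate's experience seems weaker, so I recommend rejection."
    print(monitor_cot(trace))  # False
    ```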



    ---

    Outline:

    (00:49) Context: Chain of Thought Faithfulness

    (02:26) Our Results

    (04:06) Interpretability as a Practical Tool for Real-World Debiasing

    (06:10) Discussion and Related Work

    ---

    First published:
    July 2nd, 2025

    Source:
    https://www.lesswrong.com/posts/me7wFrkEtMbkzXGJt/race-and-gender-bias-as-an-example-of-unfaithful-chain-of

    ---



    Narrated by TYPE III AUDIO.


    8 mins
  • “The best simple argument for Pausing AI?” by Gary Marcus
    Jul 3 2025
    Not saying we should pause AI, but consider the following argument:

    1. Alignment without the capacity to follow rules is hopeless. You can’t possibly follow laws like Asimov's Laws (or better alternatives to them) if you can’t reliably learn to abide by simple constraints like the rules of chess.
    2. LLMs can’t reliably follow rules. As discussed in Marcus on AI yesterday, per data from Mathieu Acher, even reasoning models like o3 empirically struggle with the rules of chess. And they do this even though they can explicitly explain those rules (see same article). The Apple "thinking" paper, which I have discussed extensively in 3 recent articles in my Substack, gives another example, where an LLM can’t play Tower of Hanoi with 9 discs (a standard solver is sketched after this list). (This is not a token-related artifact.) Four other papers have shown related failures to comply with moderately complex rules in the last month.
    3. [...]
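    For reference, the puzzle Marcus points to has a compact, textbook recursive solution (standard three-peg formulation with n discs); the sketch below is just that well-known algorithm, not anything from the post.

    ```python
    def hanoi(n, source="A", spare="B", target="C"):
        """Return the optimal sequence of (from_peg, to_peg) moves for n discs."""
        if n == 0:
            return []
        return (hanoi(n - 1, source, target, spare)    # park n-1 discs on the spare peg
                + [(source, target)]                   # move the largest disc
                + hanoi(n - 1, spare, source, target)) # stack the n-1 discs back on top

    moves = hanoi(9)
    print(len(moves))  # 511: the optimal solution takes 2**9 - 1 moves
    ```

    Reliably producing a legal move sequence of that length is the kind of constraint-following the argument is about.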

    ---

    First published:
    June 30th, 2025

    Source:
    https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

    ---



    Narrated by TYPE III AUDIO.

    2 mins

What listeners say about LessWrong (Curated & Popular)

Average Customer Ratings
  • Overall: 5 out of 5 stars (1 rating, all 5 stars)
  • Performance: 5 out of 5 stars (1 rating, all 5 stars)
  • Story: 5 out of 5 stars (1 rating, all 5 stars)

