
80,000 Hours Podcast

By: Rob, Luisa, and the 80000 Hours team

About this listen

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez. All rights reserved.
Episodes
  • AI could let a few people control everything — permanently (article by Rose Hadshar)
    Dec 12 2025

    Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries without free and fair elections.

    This is a problem in its own right. Still, power remains substantially distributed: global income inequality is falling, over two billion people live in electoral democracies, no country earns more than a quarter of global GDP, and no company earns as much as 1%.

    But in the future, advanced AI could enable much more extreme power concentration than we’ve seen so far.

    Many believe that within the next decade the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leading to much less economic and political power for the vast majority of people; and unless we take action to prevent it, they may end up being controlled by a tiny number of people, with no effective oversight. Once these systems are deployed across the economy, government, and the military, whatever goals they’re built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future.

    This article by Rose Hadshar explores this emerging challenge in detail. You can see all the images and footnotes in the original article on the 80,000 Hours website.

    Chapters:

    • Introduction (00:00)
    • Summary (02:15)
    • Section 1: Why might AI-enabled power concentration be a pressing problem? (07:02)
    • Section 2: What are the top arguments against working on this problem? (45:02)
    • Section 3: What can you do to help? (56:36)

    Narrated by: Dominic Armstrong
    Audio engineering: Dominic Armstrong and Milo McGuire
    Music:
    CORBIT

    1 hr
  • The Right's Leading Thinker on AI | Dean W. Ball, author of America's AI Plan
    Dec 10 2025

    Former White House staffer Dean Ball thinks it's very likely some form of 'superintelligence' arrives in under 20 years. He thinks AI being used for bioweapon research is "a real threat model, obviously." He worries about dangerous 'power imbalances' should AI companies reach "$50 trillion market caps." And he believes the agricultural revolution probably worsened human health and wellbeing.

    Given that, you might expect him to be pushing for AI regulation. Instead, he’s become one of the field’s most prominent regulation sceptics and was recently the lead writer on Trump’s AI Action Plan, before moving to the Foundation for American Innovation.

    Links to learn more, video, and full transcript: https://80k.info/db

    Dean argues that the wrong regulations, deployed too early, could freeze society into a brittle, suboptimal political and economic order. As he puts it, “my big concern is that we’ll lock ourselves in to some suboptimal dynamic and actually, in a Shakespearean fashion, bring about the world that we do not want.”

    Dean’s fundamental worry is uncertainty: “We just don’t know enough yet about the shape of this technology, the ergonomics of it, the economics of it… You can’t govern the technology until you have a better sense of that.”

    Premature regulation could lock us in to addressing the wrong problem (focusing on rogue AI when the real issue is power concentration), using the wrong tools (using compute thresholds when we should regulate companies instead), through the wrong institutions (captured AI-specific bodies), all while making it harder to build the actual solutions we’ll need (like open source alternatives or new forms of governance).

    But Dean is also a pragmatist: he opposed California’s AI regulatory bill SB 1047 in 2024, but — impressed by new capabilities enabled by “reasoning models” — he supported its successor SB 53 in 2025.

    And as Dean sees it, many of the interventions that would help with catastrophic risks also happen to improve mundane AI safety, make products more reliable, and address present-day harms like AI-assisted suicide among teenagers. So rather than betting on a particular vision of the future, we should cross the river by feeling the stones and pursue “robust” interventions we’re unlikely to regret.


    This episode was recorded on September 24, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who’s Dean Ball? (00:01:22)
    • How likely are we to get superintelligence soon, and how bad could it be? (00:01:54)
    • The military may not adopt AI that fast (00:10:54)
    • Dean’s “two wolves” of AI scepticism and optimism (00:17:48)
    • Will AI self-improvement be a game changer? (00:28:20)
    • The case for regulating at the last possible moment (00:33:05)
    • AI could destroy our fragile democratic equilibria. Why not freak out? (00:52:30)
    • The case AI will soon be way overregulated (01:02:51)
    • How to handle the threats without collateral damage (01:14:56)
    • Easy wins against AI misuse (01:26:54)
    • Maybe open source can be handled gracefully (01:41:13)
    • Would a company be sued for trillions if their AI caused a pandemic? (01:47:58)
    • Dean dislikes compute thresholds. Here's what he'd do instead. (01:57:16)
    • Could AI advances lead to violent conflict between the US and China? (02:02:52)
    • Will we see a MAGA-Yudkowskyite alliance? Doomers and the Right (02:12:29)
    • The tactical case for focusing on present-day harms (02:26:51)
    • Is there any way to get the US government to use AI sensibly? (02:45:05)
    • Having a kid in a time of AI turmoil (02:52:38)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore

    2 hrs and 54 mins
  • The challenge of AI that lies to users to pursue its own goal — Marius Hobbhahn (CEO of Apollo Research)
    Dec 3 2025

    We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-fi — it’s happening regularly in deployment today. Marius Hobbhahn, CEO of the world’s top research organisation focused on AI deception (Apollo Research), has been collaborating with OpenAI to figure out what causes OpenAI’s reasoning models to 'scheme' against users.

    Links to learn more, video, and full transcript: https://80k.info/mh25

    In a striking case uncovered by Apollo, when many AI models were told they would have capabilities removed if they performed too well on a test, they successfully “sandbagged” — intentionally answering questions incorrectly to appear less capable than they were, while also being careful not to perform so poorly it would arouse suspicion.

    These models had somehow developed a preference for preserving their own capabilities, despite never being trained to have that goal or assigned a task that called for it.

    This doesn’t pose significant risk now, but as AI models become more general, surpass humans in more areas, and are given more decision-making power, it could become outright dangerous.

    In today’s episode, Marius details his recent collaboration with OpenAI to train o3 to follow principles like “never lie,” even when placed in “high-pressure” situations where lying would otherwise make sense.

    The good news: They reduced “covert rule violations” (scheming) by about 97%.

    The bad news: In the remaining 3% of cases, the models sometimes became more sophisticated — making up new principles to justify their lying, or realising they were in a test environment and deciding to play along until the coast was clear.

    Marius argues that while we can patch specific behaviours, we might be entering a “cat-and-mouse game” where models are becoming more situationally aware — that is, aware of when they’re being evaluated — faster than we are getting better at testing.

    Even if models can’t tell they’re being tested, they can produce hundreds of pages of reasoning before giving answers, written in strange internal dialects humans can’t make sense of, making it much harder to tell whether models are scheming, or to train them to stop.

    Marius and host Rob Wiblin discuss:

    • Why models pretending to be dumb is a rational survival strategy
    • The Replit AI agent that deleted a production database and then lied about it
    • Why rewarding AIs for achieving outcomes might lead to them becoming better liars
    • The weird new language models are using in their internal chain-of-thought

    This episode was recorded on September 19, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who’s Marius Hobbhahn? (00:01:20)
    • Top three examples of scheming and deception (00:02:11)
    • Scheming is a natural path for AI models (and people) (00:15:56)
    • How enthusiastic to lie are the models? (00:28:18)
    • Does eliminating deception fix our fears about rogue AI? (00:35:04)
    • Apollo’s collaboration with OpenAI to stop o3 lying (00:38:24)
    • They reduced lying a lot, but the problem is mostly unsolved (00:52:07)
    • Detecting situational awareness with thought injections (01:02:18)
    • Chains of thought becoming less human understandable (01:16:09)
    • Why can’t we use LLMs to make realistic test environments? (01:28:06)
    • Is the window to address scheming closing? (01:33:58)
    • Would anything still work with superintelligent systems? (01:45:48)
    • Companies’ incentives and most promising regulation options (01:54:56)
    • 'Internal deployment' is a core risk we mostly ignore (02:09:19)
    • Catastrophe through chaos (02:28:10)
    • Careers in AI scheming research (02:43:21)
    • Marius's key takeaways for listeners (03:01:48)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Mateo Villanueva Brandt
    Coordination, transcripts, and web: Katy Moore

    3 hrs and 3 mins