Episodes

  • Book Review: If Anyone Builds It, Everyone Dies
    Sep 12 2025

    I.

    Eliezer Yudkowsky’s Machine Intelligence Research Institute is the original AI safety org. But the original isn’t always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don’t?

    MIRI answered: moral clarity.

    Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there’s some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn’t, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We’re not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we’ll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next.

    MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. They’re kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don’t expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising.

    Both sides honestly believe their position and don’t want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don’t emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way.

    Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder).

    https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone

    42 mins
  • Links For September 2025
    Sep 10 2025

    [I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]

    https://www.astralcodexten.com/p/links-for-september-2025

    44 mins
  • Your Review: Participation in Phase I Clinical Pharmaceutical Research
    Sep 10 2025

    [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]

    If you’ve been following this blog for long, you probably know at least a bit about pharmaceutical research. You might know a bit about the sort of subtle measures pharmaceutical companies take to influence doctors’ prescribing habits, or how it takes billions of dollars on average to bring a new medication to market, or something about the perverse incentives which determine the FDA’s standards for accepting or rejecting a new drug. You might have some idea what kinds of hoops a company has to jump through to conduct actual research which meets legal guidelines for patient safety and autonomy.

    You may be less familiar, though, with how the sausage is actually made. How do pharmaceutical companies actually go through the process of testing a drug on human participants?

    I’m going to be focusing here on a research subject’s view of what are known as Phase I clinical trials, the stage in which prospective drugs are tested for safety and tolerability. This is where researchers aim to answer questions like “Does this drug have any dangerous side effects?” “Through what pathways is it removed from a patient’s body?” and “Can we actually give people enough of this drug that it’s useful for anything?” This comes before the stage where researchers test how good a drug is at actually treating any sort of disease, when patients who’re suffering from the target ailments are given the option to receive it as an experimental treatment. In Phase I clinical trials, the participants are healthy volunteers who’re participating in research for money. There are almost no cases in which volunteer participation is driven by motivations other than money, because the attitudes between research participants and clinicians overwhelmingly tend to be characterized by mutual guarded distrust. This distrust is baked into the process, both on a cultural level among the participants, and by the clinics’ own incentives.

    All of what follows is drawn from my own experiences, and experiences that other participants in clinical pharmaceutical research have shared with me, because for reasons which should become clear over the course of this review, research which systematically explores the behaviors and motives of clinical research participants is generally not feasible to conduct.

    https://www.astralcodexten.com/p/your-review-participation-in-phase

    25 mins
  • What Is Man, That Thou Art Mindful Of Him?
    Sep 10 2025

    "You made him lower than the angels for a short time..."

    God: …and the math results we’re seeing are nothing short of incredible. This Terry Tao guy -

    Iblis: Let me stop you right there. I agree humans can, in controlled situations, provide correct answers to math problems. I deny that they truly understand math. I had a conversation with one of the humans recently, which I’ll bring up here for the viewers … give me one moment …

    https://www.astralcodexten.com/p/what-is-man-that-thou-art-mindful

    15 mins
  • Open Letter To The NIH
    Sep 2 2025

    You can sign the letter here.

    The Trump administration has been retaliating against its critics, and people and groups with business before the administration have started laundering criticism through other sources with less need for goodwill. So I have been asked to share an open letter, which needs signatures from scientists, doctors, and healthcare professionals.

    The authors tell me (THIS IS NOT THE CONTENTS OF THE LETTER, IT’S THEIR EXPLANATION, TO ME, OF WHAT THE LETTER IS FOR):

    The NIH has spent at least $5 billion less than Congress has appropriated to it, which is bad because medical research is good and we want more of it. In May, NIH Director Jay Bhattacharya told a room full of people that he would spend all the money by the end of the fiscal year. That is good news, because any money not spent by that point will disappear. The bad news is that the fiscal year ends on September 30th and, according to the American Association of Medical Colleges, “the true shortfall far exceeds $5 billion.” Our open letter requests that Dr. Bhattacharya do what he said he would and spend all the money by September 30th.

    We as the originators of the letter do not want to be named publicly because we are concerned about being the focal point for blame and retaliation. We would rather be members of a large crowd of signatories than be singled out as individuals to make an example of. Based on our understanding of current administration norms, we do not expect retaliation against private individuals who sign this letter.

    We are looking for signatures from scientists, doctors, and healthcare professionals. So if that is you, please sign here. If you want to help support the letter more broadly, email nihfundingletter@gmail.com. Our stretch goal is to have a thousand people sign the letter within the next two weeks.

    To hammer home (since many people failed to understand it) that this is not the contents of the letter, I am including the actual contents below:

    We, the undersigned scientists, doctors, and public health stakeholders, commend your commitment to spend all funds allocated to the NIH, as reported in The Washington Post. At the same time, we are concerned by reports that U.S. institutions received nearly $5 billion less in NIH awards over the past year. With less than one month to the end of the fiscal year, we submit this urgent request to ensure that your commitment is upheld. If you anticipate that all appropriated funds cannot be spent in time, we request a public disclosure of the barriers preventing the achievement of this crucial responsibility.

    We present this request in the spirit of the broad, bipartisan consensus in favor of spending appropriated NIH funds. In their July letter to the Office of Management and Budget, fourteen Republican senators, led by Senators Collins, Britt, and McConnell, forcefully argued that suspension of NIH funds “could threaten Americans’ ability to access better treatments and limit our nation’s leadership in biomedical science.” The case for investment in medical research transcends political divides, as it serves our collective national interest. The return on investment from research is compelling: synthesizing the empirical literature, economist Matt Clancy estimates that each public and private R&D dollar yields roughly $5.50 in GDP, and about $11 when broader benefits are counted.

    Every dollar of NIH funding not deployed represents lost opportunities for breakthrough treatments, missed chances to train the next generation of scientists, and diminished returns on America’s innovation ecosystem. Spending these funds is also a competitiveness imperative as China attempts to transform itself from a low-end manufacturer into a high-tech research and innovation juggernaut. In 2024, the Chinese government increased its spending on science and technology by 10%, and the nation’s total expenditure on research and development increased by 50% in nominal terms between 2020 and 2024. As China’s number of clinical trials and new drug candidates begins to outpace America’s, the U.S. cannot afford to allow biomedical research funding to go unspent.

    We respectfully ask that you ensure that NIH will obligate all FY25 funds by September 30, 2025, and, if that is not possible, that you address the scientific community to explain why and what must be done to ensure all appropriated funds are spent in FY26. We stand ready to support your efforts to preserve this vital national investment.

    https://readscottalexander.com/posts/acx-open-letter-to-the-nih
    5 mins
  • In Search Of AI Psychosis
    Sep 2 2025

    AI psychosis (NYT, PsychologyToday) is an apparent phenomenon where people go crazy after talking to chatbots too much. There are some high-profile anecdotes, but still many unanswered questions. For example, how common is it really? Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already? Isn’t psychosis supposed to be a biological disease? Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?

    I don’t have all the answers, so think of this post as an exploration of possible analogies and precedents rather than a strongly-held thesis. Also, I might have one answer - I think the yearly incidence of AI psychosis is somewhere around 1 in 10,000 (for a loose definition) to 1 in 100,000 (for a strict definition). I’ll talk about how I got those numbers at the end. But first:

    I. Lenin Was A Mushroom

    https://www.astralcodexten.com/p/in-search-of-ai-psychosis

    25 mins
  • Your Review: Ollantay
    Aug 24 2025

    Finalist #9 in the Review Contest

    [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]

    Ollantay is a three-act play written in Quechua, an indigenous language of the South American Andes. It was first performed in Peru around 1775. Since the mid-1800s it’s been performed more often, and nowadays it’s pretty easy to find some company in Peru doing it. If nothing else, it’s popular in Peruvian high schools as a way to get students to connect with Quechua history. It’s not a particularly long play; a full performance of Ollantay takes around an hour.

    Also, nobody knows where Ollantay was written, when it was written, or who wrote it. And its first documented performance led directly to upwards of a hundred thousand deaths.

    Macbeth has killed at most fifty people, and yet it routinely tops listicles of “deadliest plays”. I’m here to propose that Ollantay take its place.

    https://www.astralcodexten.com/p/your-review-ollantay

    32 mins
  • My Responses To Three Concerns From The Embryo Selection Post
    Aug 24 2025

    [original post here]

    #1: Isn’t it possible that embryos are alive, or have personhood, or are moral patients? Most IVF involves getting many embryos, then throwing out the ones that the couple doesn’t need to implant. If destroying embryos were wrong, then IVF would be unethical - and embryo selection, which might encourage more people to do IVF, or to maximize the number of embryos they get from IVF, would be extra unethical.

    I think a default position would be that if you believe humans are more valuable than cows, and cows more valuable than bugs - presumably because humans are more conscious/intelligent/complex/thoughtful/have more hopes and dreams/experience more emotions - then embryos, which have less of a brain and nervous system even than bugs, should be less valuable still.

    One reason to abandon this default position would be if you believe in souls or some other nonphysical basis for personhood. Then maybe the soul would enter the embryo at conception. I think even here, it’s hard to figure out exactly what you’re saying - the soul clearly isn’t doing very much, in the sense of experiencing things, while it’s in the embryo. But it seems like God is probably pretty attached to souls, and maybe you don’t want to mess with them while He’s watching. In any case, all I can say is that this isn’t my metaphysics.

    But most people in the comments took a different tack, arguing that we should give embryos special status (compared to cows and bugs) because they had the potential to grow into a person.

    https://www.astralcodexten.com/p/my-responses-to-three-concerns-from

    19 mins