This episode explores the paper AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, which challenges the idea of artificial intelligence as a superintelligent, transformative threat. Instead, the authors argue that AI should be understood as part of a long line of general-purpose technologies: more like electricity or the internet, less like an alien mind.

Their core message works on three levels, as description, prediction, and prescription: AI is currently a tool under human control, it will likely remain so, and we should approach its development through policies of resilience, not existential fear.

Arvind Narayanan is a professor of computer science at Princeton University and director of the Center for Information Technology Policy. Sayash Kapoor is a Senior Fellow at Mozilla, a Laurance S. Rockefeller Fellow at the Princeton Center for Human Values, and a computer science PhD candidate at Princeton. Together they co-authored AI Snake Oil, named one of Nature’s 10 best books of 2024, and write a newsletter followed by 50,000 researchers, policymakers, journalists, and AI enthusiasts.

This episode reflects on how their framing shifts the conversation away from utopian or dystopian extremes and toward the slower, more human work of integrating technologies into social, organisational, and political life.

Companion notes

Key ideas from Ep. X: AI as Normal Technology

This episode reflects on AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, a paper arguing that AI should be seen as part of a long pattern of transformative but gradual technologies, not as an existential threat or superintelligent agent. Here are three key ideas that stand out:

1. AI is a tool, not an alien intelligence

The authors challenge the common framing of AI as a kind of autonomous mind.

* Current AI systems are tools under human control, not independent agents.
* Technological impact comes from how tools are used and integrated, not from some inherent “intelligence” inside the technology.
* Predicting AI’s future as a runaway force overlooks how society, institutions, and policy shape technological outcomes.

This framing invites us to ask who is using AI, how it is being used, and for what purposes, not just what the technology can do. It also reminds us that understanding the human side of AI systems (their users, contexts, and social effects) is as important as tracking technical performance.

2. Progress will be gradual and messy

The speed of AI diffusion is shaped by more than technical capability.

* Technological progress moves through invention, innovation, adoption, and diffusion, and each stage has its own pace.
* Safety-critical domains like healthcare or criminal justice are slow by design, often constrained by regulation.
* General benchmarks (like exam performance) tell us little about real-world impacts or readiness for professional tasks.

This challenges the popular narrative of sudden, transformative change and helps temper predictions of mass automation or societal disruption. It also highlights the often-overlooked role of human, organisational, and cultural adaptation: the frictions, resistances, and recalibrations that shape how technologies actually land in the world.
3. Focus on resilience, not speculative fears

The paper argues for governance that centres on resilience, not control over hypothetical superintelligence.

* Most risks, like accidents, misuse, or arms races, are familiar from past technologies and can be addressed with established tools.
* Policies that improve adaptability, reduce uncertainty, and strengthen downstream safeguards matter more than model-level “alignment.”
* Efforts to restrict or monopolise access to AI may paradoxically reduce resilience and harm safety innovation.

This approach reframes AI policy as a governance challenge, not a science fiction problem, and it implicitly points to the importance of understanding how humans and institutions build, maintain, and sometimes erode resilience over time.

Narayanan and Kapoor’s work is a valuable provocation for anyone thinking about AI futures, policy, or ethics. It pushes the conversation back toward the social and political scaffolding around technology, where, ultimately, its impacts are shaped.

It’s a reminder that while much of the current conversation focuses on the capabilities and risks of the technology itself, we also need to pay attention to what’s happening on the human side: how people interpret, adopt, adapt to, and reshape these systems in practice.

Always curious how others are thinking about resilience and governance.

Until next time.