Artificial Thought Podcast

By: Elina Halonen
  • Summary

  • A behavioural science look at how AI changes the way we decide, act, and make sense of the world.

    artificialthought.substack.com
    Elina Halonen
Episodes
  • Ep. 10: AI as Normal Technology
    May 1 2025
    This episode explores the paper AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, which challenges the idea of artificial intelligence as a superintelligent, transformative threat. Instead, the authors argue that AI should be understood as part of a long line of general-purpose technologies—more like electricity or the internet, less like an alien mind.

    Their core message is threefold, working as description, prediction, and prescription: AI is currently a tool under human control, it will likely remain so, and we should approach its development through policies of resilience, not existential fear.

    Arvind Narayanan is a professor of computer science at Princeton University and director of the Center for Information Technology Policy. Sayash Kapoor is a Senior Fellow at Mozilla, a Laurance S. Rockefeller Fellow at the Princeton Center for Human Values, and a computer science PhD candidate at Princeton. Together they co-authored AI Snake Oil, named one of Nature’s 10 best books of 2024, and write a newsletter followed by 50,000 researchers, policymakers, journalists, and AI enthusiasts.

    This episode reflects on how their framing shifts the conversation away from utopian or dystopian extremes and toward the slower, more human work of integrating technologies into social, organisational, and political life.

    Companion notes

    Key ideas from Ep. 10: AI as Normal Technology

    This episode reflects on AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, a paper arguing that AI should be seen as part of a long pattern of transformative but gradual technologies—not as an existential threat or superintelligent agent. Here are three key ideas that stand out:

    1. AI is a tool, not an alien intelligence

    The authors challenge the common framing of AI as a kind of autonomous mind.

    * Current AI systems are tools under human control, not independent agents.

    * Technological impact comes from how tools are used and integrated, not from some inherent “intelligence” inside the technology.

    * Predicting AI’s future as a runaway force overlooks how society, institutions, and policy shape technological outcomes.

    This framing invites us to ask who is using AI, how it is being used, and for what purposes—not just what the technology can do. It also reminds us that understanding the human side of AI systems—their users, contexts, and social effects—is as important as tracking technical performance.

    2. Progress will be gradual and messy

    The speed of AI diffusion is shaped by more than technical capability.

    * Technological progress moves through invention, innovation, adoption, and diffusion—and each stage has its own pace.

    * Safety-critical domains like healthcare or criminal justice are slow by design, often constrained by regulation.

    * General benchmarks (like exam performance) tell us little about real-world impacts or readiness for professional tasks.

    This challenges the popular narrative of sudden, transformative change and helps temper predictions of mass automation or societal disruption. It also highlights the often-overlooked role of human, organisational, and cultural adaptation—the frictions, resistances, and recalibrations that shape how technologies actually land in the world.

    3. Focus on resilience, not speculative fears

    The paper argues for governance that centres on resilience, not control over hypothetical superintelligence.

    * Most risks—like accidents, misuse, or arms races—are familiar from past technologies and can be addressed with established tools.

    * Policies that improve adaptability, reduce uncertainty, and strengthen downstream safeguards matter more than model-level “alignment.”

    * Efforts to restrict or monopolise access to AI may paradoxically reduce resilience and harm safety innovation.

    This approach reframes AI policy as a governance challenge, not a science fiction problem, and it implicitly points to the importance of understanding how humans and institutions build, maintain, and sometimes erode resilience over time.

    Narayanan and Kapoor’s work is a valuable provocation for anyone thinking about AI futures, policy, or ethics. It pushes the conversation back toward the social and political scaffolding around technology, where, ultimately, its impacts are shaped.

    It’s a reminder that while much of the current conversation focuses on the capabilities and risks of the technology itself, we also need to pay attention to what’s happening on the human side: how people interpret, adopt, adapt to, and reshape these systems in practice.

    Always curious how others are thinking about resilience and governance. Until next time.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    29 mins
  • Ep. 9: The Bias Loop
    Apr 27 2025

    This episode reflects on a 2024 Nature Human Behaviour article by Moshe Glickman and Tali Sharot, which investigates how interacting with AI systems can subtly alter human perception, emotion, and social judgement. Their research shows that when humans interact with even slightly biased AI, their own biases increase over time—and more so than when interacting with other people.

    This creates a feedback loop: humans train AI, and AI reshapes how humans see the world. The paper highlights a dynamic that often goes unnoticed in AI ethics or UX design conversations—how passive, everyday use of AI systems can gradually reinforce distorted norms of judgement.

    These reflections are especially relevant for AI developers, behavioural researchers, and policymakers thinking about how systems influence belief, bias, and social cognition over time.

    Source: Glickman, M., Sharot, T. How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat Hum Behav 9, 345–359 (2025). https://doi.org/10.1038/s41562-024-02077-2

    Key ideas from Ep. 9: The Bias Loop

    This episode reflects on a 2024 article in Nature Human Behaviour by Moshe Glickman and Tali Sharot, which explores how human–AI interactions create feedback loops that amplify human biases. The core finding: slightly biased AI doesn’t just reflect human judgement—it magnifies it. And when humans repeatedly engage with these systems, they often adopt those amplified biases as their own.

    Here are three things worth paying attention to:

    1. AI doesn't just mirror—it intensifies

    Interacting with AI can shift our perceptions more than interacting with people.

    * AI systems trained on slightly biased data tended to exaggerate that bias.

    * When people then used those systems, their own bias increased—sometimes substantially.

    * This happened across domains: perceptual tasks (e.g. emotion recognition), social categorisation, and even real-world image generation (e.g. AI-generated images of “financial managers”).

    Unlike human feedback, AI judgements feel consistent, precise, and authoritative—making them more persuasive, even when wrong.

    2. People underestimate AI’s influence

    Participants thought they were being more influenced by accurate AI—but biased AI shaped their thinking just as much.

    * Most participants didn’t realise how much the biased AI was nudging them.

    * Feedback labelled as coming from “AI” had a stronger influence than when labelled as “human,” even when the content was identical.

    * This suggests that perceived objectivity enhances influence—even when the output is flawed.

    Subtle framing cues (like labelling) matter more than we assume in shaping trust and uptake.

    3. Feedback loops are a design risk—and an opportunity

    Bias can accumulate over time. But so can accuracy.

    * Repeated exposure to biased AI increased human bias, but repeated exposure to accurate AI improved human judgement.

    * Small changes in training data, system defaults, or how outputs are framed can shift trajectories over time.

    * That means AI systems don’t just transmit information. They shape norms of perception and evaluation.

    Design choices that reduce error or clarify uncertainty won’t just improve individual outputs—they could reduce cumulative bias at scale.

    The study’s findings offer a clear behavioural mechanism for something often discussed in theory: how AI systems can influence society indirectly, through micro-shifts in user cognition. For developers, that means accounting not just for output accuracy, but for how people change through use. For behavioural scientists, it raises questions about how norms are formed in system-mediated environments. And for policy, it adds weight to the argument that user-facing AI isn’t just a content issue—it’s a cognitive one.
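    To make the loop concrete, here is a minimal toy sketch of how a small initial skew can compound when a system trained on biased judgements feeds those judgements back to its users. It is not from the paper: the starting bias, amplification factor, and learning rate are illustrative assumptions, not values from the study.

    ```python
    # Toy model of a human-AI bias feedback loop (illustrative assumptions only).
    TRUE_RATE = 0.50    # actual proportion of "sad" stimuli in the task
    human_bias = 0.03   # assumption: human starts out slightly over-reporting "sad"
    AMPLIFY = 1.8       # assumption: training on skewed labels exaggerates the skew
    LEARN_RATE = 0.15   # assumption: how strongly AI feedback shifts the human

    for round_num in range(1, 6):
        human_rate = TRUE_RATE + human_bias          # human labels with a slight skew
        ai_rate = TRUE_RATE + AMPLIFY * human_bias   # AI trained on those labels exaggerates it
        human_bias += LEARN_RATE * (ai_rate - human_rate)  # exposure nudges the human further
        print(f"round {round_num}: human {human_rate:.1%}, AI {ai_rate:.1%}")
    ```

    Run over a few rounds, the human's skew keeps growing even though the true rate never changes, which is the accumulation pattern the design points above are gesturing at.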

    Always curious how others are approaching these design risks. Until next time.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    20 mins
  • Ep. 8: Understanding the AI Index 2025
    Apr 22 2025
    In this episode of Artificial Thought, we unpack the 2025 AI Index report—one of the most comprehensive snapshots of where the field sees itself heading. From benchmark breakthroughs to global regulation, the report tracks acceleration in every direction. But what does that tell us about the systems we’re building—and what does it leave out?

    We’ll walk through some of the major findings, raise a few behavioural questions, and reflect on what’s really being measured when progress moves this fast.

    This episode sets the stage for a deeper conversation in the next post, where we explore the blind spots in AI evaluation—and why behavioural insight might be key to filling them.

    A closer look at the AI Index 2025

    The AI Index is one of the most influential annual snapshots of how artificial intelligence is evolving—tracking everything from technical performance and economic trends to regulation, education, and public opinion.

    Produced by the Stanford Institute for Human-Centered AI, it’s the most data-rich public record we have of where the field sees itself going, but with hundreds of charts, complex benchmarks, and deep dives across disciplines, it’s not exactly designed for a quick read.

    Still, understanding where the field is heading matters—especially for those of us thinking about the behavioural consequences of system design. This guide offers a high-level overview of the major findings, with just enough context to orient you and just enough interpretation to make you pause. It pairs with the podcast episode, which walks through some of the trends conversationally. A deeper companion essay is coming soon.

    Technical leaps are happening fast, but unevenly

    AI systems are getting dramatically better at coding, answering expert-level questions, and understanding complex inputs like charts and diagrams. One coding benchmark, SWE-bench, saw models jump from solving 4% of problems to over 70% in a single year. That’s a massive leap in software automation.

    At the same time, models are struggling with tasks that involve deep reasoning, logical planning, or understanding cause and effect. So while AI can now mimic expertise in many areas, it still stumbles on the kind of reasoning that requires steps, context, and reflection.

    The report also notes that smaller, more efficient models are catching up fast to their larger, more resource-intensive predecessors. This suggests AI might soon become more accessible—embedded in everyday tools, not just the big platforms.

    AI is spreading through business, but impact is still limited

    Corporate investment in AI reached $252.3 billion in 2024, a 26% increase from the previous year. Adoption is rising too: 78% of organisations say they’re using AI in some form, and use of generative AI more than doubled in just one year.

    But most companies report modest results so far. Less than 10% say they’re seeing major cost savings or new revenue directly from AI. In other words: the hype is real, but the business value is still emerging—and often depends on how well the tools are integrated into human workflows.

    Global optimism is rising, but opinions are polarised

    More people believe AI will change their daily lives in the next three to five years—and optimism about AI’s potential is growing in countries that were previously more cautious, like Germany and Canada.

    But there’s a clear regional divide:

    * Countries like China, Indonesia, and Mexico report high excitement and trust.

    * The US, UK, and Canada are more sceptical—especially about fairness, privacy, and the intentions of AI companies.

    Trust in the ethical conduct of AI developers is declining globally, even as enthusiasm grows. This gap—between excitement and concern—is a key behavioural dynamic to watch.

    Responsible AI is talked about more than it’s practised

    Although companies are increasingly aware of the risks—like bias, misinformation, and security—few are taking consistent steps to mitigate them.

    Barriers include:

    * A lack of training or understanding among teams

    * Unclear regulations

    * Competing priorities, like speed to market

    Even in healthcare, where stakes are high, studies show that adding AI doesn’t always improve outcomes unless the workflows and roles are carefully redesigned.

    Collaboration is assumed but not guaranteed

    Many parts of the report reflect an implicit belief that AI will support, not replace, human roles. Research shows that in many settings, AI complements human work—handling detail or recall while people focus on judgement.

    But this collaboration only works well when the system fits the task and the user understands how to work with it. Some studies show that using AI tools alongside traditional resources doesn’t always improve results. The interaction needs to be well-designed.

    AI is deeply embedded in science, medicine, and education

    From helping win Nobel prizes to predicting protein structures and assisting in climate modelling, AI is becoming a core part of scientific ...
    20 mins
