• Ep. 10: AI as Normal Technology
    May 1 2025
    This episode explores the paper AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, which challenges the idea of artificial intelligence as a superintelligent, transformative threat. Instead, the authors argue that AI should be understood as part of a long line of general-purpose technologies—more like electricity or the internet, less like an alien mind.

    Their core message is threefold, working as description, prediction, and prescription: AI is currently a tool under human control, it will likely remain so, and we should approach its development through policies of resilience, not existential fear.

    Arvind Narayanan is a professor of computer science at Princeton University and director of the Center for Information Technology Policy. Sayash Kapoor is a Senior Fellow at Mozilla, a Laurance S. Rockefeller Fellow at the Princeton Center for Human Values, and a computer science PhD candidate at Princeton. Together they co-author AI Snake Oil, named one of Nature’s 10 best books of 2024, and a newsletter followed by 50,000 researchers, policymakers, journalists, and AI enthusiasts.

    This episode reflects on how their framing shifts the conversation away from utopian or dystopian extremes and toward the slower, more human work of integrating technologies into social, organisational, and political life.

    Companion notes

    Key ideas from Ep. 10: AI as Normal Technology

    This episode reflects on AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, a paper arguing that AI should be seen as part of a long pattern of transformative but gradual technologies—not as an existential threat or superintelligent agent. Here are three key ideas that stand out:

    1. AI is a tool, not an alien intelligence

    The authors challenge the common framing of AI as a kind of autonomous mind.

    * Current AI systems are tools under human control, not independent agents.

    * Technological impact comes from how tools are used and integrated, not from some inherent “intelligence” inside the technology.

    * Predicting AI’s future as a runaway force overlooks how society, institutions, and policy shape technological outcomes.

    This framing invites us to ask who is using AI, how it is being used, and for what purposes—not just what the technology can do. It also reminds us that understanding the human side of AI systems—their users, contexts, and social effects—is as important as tracking technical performance.

    2. Progress will be gradual and messy

    The speed of AI diffusion is shaped by more than technical capability.

    * Technological progress moves through invention, innovation, adoption, and diffusion—and each stage has its own pace.

    * Safety-critical domains like healthcare or criminal justice are slow by design, often constrained by regulation.

    * General benchmarks (like exam performance) tell us little about real-world impacts or readiness for professional tasks.

    This challenges the popular narrative of sudden, transformative change and helps temper predictions of mass automation or societal disruption. It also highlights the often-overlooked role of human, organisational, and cultural adaptation—the frictions, resistances, and recalibrations that shape how technologies actually land in the world.

    3. Focus on resilience, not speculative fears

    The paper argues for governance that centres on resilience, not control over hypothetical superintelligence.

    * Most risks—like accidents, misuse, or arms races—are familiar from past technologies and can be addressed with established tools.

    * Policies that improve adaptability, reduce uncertainty, and strengthen downstream safeguards matter more than model-level “alignment.”

    * Efforts to restrict or monopolise access to AI may paradoxically reduce resilience and harm safety innovation.

    This approach reframes AI policy as a governance challenge rather than a science fiction problem, and it implicitly points to the importance of understanding how humans and institutions build, maintain, and sometimes erode resilience over time.

    Narayanan and Kapoor’s work is a valuable provocation for anyone thinking about AI futures, policy, or ethics. It pushes the conversation back toward the social and political scaffolding around technology, where, ultimately, its impacts are shaped. It’s a reminder that while much of the current conversation focuses on the capabilities and risks of the technology itself, we also need to pay attention to what’s happening on the human side: how people interpret, adopt, adapt to, and reshape these systems in practice.

    Always curious how others are thinking about resilience and governance. Until next time.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    29 mins
  • Ep. 9: The Bias Loop
    Apr 27 2025

    This episode reflects on a 2024 Nature Human Behaviour article by Moshe Glickman and Tali Sharot, which investigates how interacting with AI systems can subtly alter human perception, emotion, and social judgement. Their research shows that when humans interact with even slightly biased AI, their own biases increase over time—and more so than when interacting with other people.

    This creates a feedback loop: humans train AI, and AI reshapes how humans see the world. The paper highlights a dynamic that often goes unnoticed in AI ethics or UX design conversations—how passive, everyday use of AI systems can gradually reinforce distorted norms of judgement.

    These reflections are especially relevant for AI developers, behavioural researchers, and policymakers thinking about how systems influence belief, bias, and social cognition over time.

    Source: Glickman, M., Sharot, T. How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat Hum Behav 9, 345–359 (2025). https://doi.org/10.1038/s41562-024-02077-2

    Key ideas from Ep. 9: The Bias Loop

    This episode reflects on a 2024 article in Nature Human Behaviour by Moshe Glickman and Tali Sharot, which explores how human–AI interactions create feedback loops that amplify human biases. The core finding: slightly biased AI doesn’t just reflect human judgement—it magnifies it. And when humans repeatedly engage with these systems, they often adopt those amplified biases as their own.

    Here are three things worth paying attention to:

    1. AI doesn't just mirror—it intensifies

    Interacting with AI can shift our perceptions more than interacting with people.

    * AI systems trained on slightly biased data tended to exaggerate that bias.

    * When people then used those systems, their own bias increased—sometimes substantially.

    * This happened across domains: perceptual tasks (e.g. emotion recognition), social categorisation, and even real-world image generation (e.g. AI-generated images of “financial managers”).

    Unlike human feedback, AI judgements feel consistent, precise, and authoritative—making them more persuasive, even when wrong.

    2. People underestimate AI’s influence

    Participants thought they were being more influenced by accurate AI—but biased AI shaped their thinking just as much.

    * Most participants didn’t realise how much the biased AI was nudging them.

    * Feedback labelled as coming from “AI” had a stronger influence than when labelled as “human,” even when the content was identical.

    * This suggests that perceived objectivity enhances influence—even when the output is flawed.

    Subtle framing cues (like labelling) matter more than we assume in shaping trust and uptake.

    3. Feedback loops are a design risk—and an opportunity

    Bias can accumulate over time. But so can accuracy.

    * Repeated exposure to biased AI increased human bias, but repeated exposure to accurate AI improved human judgement.

    * Small changes in training data, system defaults, or how outputs are framed can shift trajectories over time.

    * That means AI systems don’t just transmit information. They shape norms of perception and evaluation.

    Design choices that reduce error or clarify uncertainty won’t just improve individual outputs—they could reduce cumulative bias at scale.
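    To make that dynamic concrete, here is a minimal toy simulation—an illustrative sketch of my own, not the study’s model or code—of how a small initial lean can compound when a system amplifies it and the user then updates toward the system’s output. All numbers are assumptions chosen only to show the shape of the loop.

        # Toy feedback-loop sketch: a human judgement seeds an "AI" that
        # exaggerates its lean, and the human then shifts toward the AI.
        # Parameters are illustrative assumptions, not values from the paper.
        true_value = 0.50        # ground truth (e.g. actual proportion in a perceptual task)
        human_estimate = 0.53    # human starts with a small bias
        amplification = 1.4      # AI exaggerates deviations it learns from human input
        influence = 0.3          # how far the human moves toward the AI each round

        for round_number in range(1, 11):
            # AI trained on the human's judgement amplifies the existing lean
            ai_estimate = true_value + amplification * (human_estimate - true_value)
            # Human partially adopts the AI's (more biased) judgement
            human_estimate += influence * (ai_estimate - human_estimate)
            print(f"round {round_number:2d}: human bias = {human_estimate - true_value:+.3f}")

    Under these assumptions the bias grows a little each round rather than washing out, which is the accumulation the study describes; an accurate system (amplification below 1) would instead pull the estimate back toward the truth over repeated rounds.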

    The study’s findings offer a clear behavioural mechanism for something often discussed in theory: how AI systems can influence society indirectly, through micro-shifts in user cognition. For developers, that means accounting not just for output accuracy, but for how people change through use. For behavioural scientists, it raises questions about how norms are formed in system-mediated environments. And for policy, it adds weight to the argument that user-facing AI isn’t just a content issue—it’s a cognitive one.

    Always curious how others are approaching these design risks. Until next time.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    20 mins
  • Ep. 8: Understanding the AI Index 2025
    Apr 22 2025
    In this episode of Artificial Thought, we unpack the 2025 AI Index report—one of the most comprehensive snapshots of where the field sees itself heading. From benchmark breakthroughs to global regulation, the report tracks acceleration in every direction. But what does that tell us about the systems we’re building—and what does it leave out?

    We’ll walk through some of the major findings, raise a few behavioural questions, and reflect on what’s really being measured when progress moves this fast. This episode sets the stage for a deeper conversation in the next post, where we explore the blind spots in AI evaluation—and why behavioural insight might be key to filling them.

    A closer look at the AI Index 2025

    The AI Index is one of the most influential annual snapshots of how artificial intelligence is evolving—tracking everything from technical performance and economic trends to regulation, education, and public opinion. Produced by the Stanford Institute for Human-Centered AI, it’s the most data-rich public record we have of where the field sees itself going, but with hundreds of charts, complex benchmarks, and deep dives across disciplines, it’s not exactly designed for a quick read.

    Still, understanding where the field is heading matters—especially for those of us thinking about the behavioural consequences of system design. This guide offers a high-level overview of the major findings, with just enough context to orient you and just enough interpretation to make you pause. It pairs with the podcast episode, which walks through some of the trends conversationally. A deeper companion essay is coming soon.

    Technical leaps are happening fast, but unevenly

    AI systems are getting dramatically better at coding, answering expert-level questions, and understanding complex inputs like charts and diagrams. One coding benchmark, SWE-bench, saw models jump from solving 4% of problems to over 70% in a single year. That’s a massive leap in software automation.

    At the same time, models are struggling with tasks that involve deep reasoning, logical planning, or understanding cause and effect. So while AI can now mimic expertise in many areas, it still stumbles on the kind of reasoning that requires steps, context, and reflection.

    The report also notes that smaller, more efficient models are catching up fast to their larger, more resource-intensive predecessors. This suggests AI might soon become more accessible—embedded in everyday tools, not just the big platforms.

    AI is spreading through business, but impact is still limited

    Corporate investment in AI reached $252.3 billion in 2024, a 26% increase from the previous year. Adoption is rising too: 78% of organisations say they’re using AI in some form, and use of generative AI more than doubled in just one year.

    But most companies report modest results so far. Less than 10% say they’re seeing major cost savings or new revenue directly from AI. In other words: the hype is real, but the business value is still emerging—and often depends on how well the tools are integrated into human workflows.

    Global optimism is rising, but opinions are polarised

    More people believe AI will change their daily lives in the next three to five years—and optimism about AI’s potential is growing in countries that were previously more cautious, like Germany and Canada. But there’s a clear regional divide:

    * Countries like China, Indonesia, and Mexico report high excitement and trust.

    * The US, UK, and Canada are more sceptical—especially about fairness, privacy, and the intentions of AI companies.

    Trust in the ethical conduct of AI developers is declining globally, even as enthusiasm grows. This gap—between excitement and concern—is a key behavioural dynamic to watch.

    Responsible AI is talked about more than it’s practised

    Although companies are increasingly aware of the risks—like bias, misinformation, and security—few are taking consistent steps to mitigate them. Barriers include:

    * A lack of training or understanding among teams

    * Unclear regulations

    * Competing priorities, like speed to market

    Even in healthcare, where stakes are high, studies show that adding AI doesn’t always improve outcomes unless the workflows and roles are carefully redesigned.

    Collaboration is assumed, but not guaranteed

    Many parts of the report reflect an implicit belief that AI will support, not replace, human roles. Research shows that in many settings, AI complements human work—handling detail or recall while people focus on judgement.

    But this collaboration only works well when the system fits the task and the user understands how to work with it. Some studies show that using AI tools alongside traditional resources doesn’t always improve results. The interaction needs to be well-designed.

    AI is deeply embedded in science, medicine, and education

    From helping win Nobel prizes to predicting protein structures and assisting in climate modelling, AI is becoming a core part of scientific ...
    20 mins
  • Ep. 7: Co-intelligence and the shape of collaboration
    Apr 20 2025
    This episode offers a short overview of Co-Intelligence: Living and Working with AI by Ethan Mollick, a book that argues for a shift in how we relate to artificial intelligence—not as a tool to control or replace us, but as a “co-intelligence” to collaborate with.

    Mollick suggests that large language models represent a new kind of thinking system: statistically trained, behaviourally unpredictable, and full of potential when paired with human judgement. The book explores how AI is reshaping creativity, education, decision-making, and the boundaries of human work—and why getting to know these systems deeply may be the only way to use them well. These reflections explore what it means to collaborate with something that predicts rather than understands, and how to navigate a system that offers insight, but not intent.

    Mollick, a professor at the Wharton School of the University of Pennsylvania, is known for making AI accessible to a wide audience through research-informed teaching and practical writing. This book is part of that ongoing effort of translating technical developments into human terms, without losing sight of their complexity. He is one of the most accessible voices writing about generative AI today, and Co-Intelligence is a must-read entry point for thinking through the trade-offs involved in working with these systems. It’s not a technical book, but a strategic one—written from the position of a curious user trying to understand what we’ve built, and how to live with it. It’s also well worth following him on LinkedIn, and here on Substack, if you’re not doing so already!

    Key ideas from Ep. 7: Co-Intelligence and the Shape of Collaboration

    This episode draws on Ethan Mollick’s Co-Intelligence, a book about how artificial intelligence—especially large language models—is reshaping how we think, work, and create. Mollick’s core argument is that AI is neither a tool to be blindly automated nor a mind to be trusted. It is something in between: a strange, alien co-intelligence that becomes useful when paired with human context, scepticism, and purpose. Here are a few key themes that stood out for me:

    1. Thinking with prediction, not intention

    LLMs behave like minds, but they’re built to predict—not understand.

    * These systems work by forecasting the next likely word or token based on statistical training, not internal logic or intention.

    * Users often experience them as responsive or intelligent, but their outputs are shaped by pattern probability, not comprehension.

    * This creates a unique kind of collaboration: helpful, high-variance, and sometimes startling—but never grounded in awareness.

    Understanding this core mechanism matters. It helps explain both the creativity and the unreliability.

    2. Human–AI collaboration is messy and uneven

    The future isn’t full automation—it’s learning how to work with jagged, unpredictable systems.

    * Mollick introduces the idea of the Jagged Frontier: some tasks AI handles impressively; others, just adjacent, remain out of reach.

    * Effective collaboration involves experimentation. Users have to try, observe, and adjust—there’s no universal guide.

    * The most successful users are often user innovators—those who explore creatively, adapt prompts, and learn from breakdowns.

    Treating AI as a fixed tool misses the point. It’s better understood as a cognitive partner with blind spots.

    3. Co-intelligence depends on human framing

    What makes AI valuable isn’t what it knows—it’s what you bring to the interaction.

    * LLMs can offer new framings, challenge assumptions, and break status quo bias—but only when prompted carefully.

    * Mollick emphasises the importance of defining personas and constraints to guide the system’s output.

    * AI can generate novelty, but it’s the human who selects, shapes, and applies it. That’s where the real intelligence lives.

    The value of co-intelligence lies not in automation, but in alignment—with human goals, context, and curiosity.

    Always curious how others are making sense of these tools. Until next time.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    16 mins
  • Ep. 6: When AI Feels Human
    Apr 18 2025
    This episode offers a short overview of the article “When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design” (Maeda & Quan-Haase, 2024), which explores how conversational AI systems, designed with human-like features, foster parasocial relationships through language, affective cues, and simulated reciprocity. These design choices invite users to assign social roles to chatbots, project meaning into partial responses, and develop a kind of trust that feels relational but is entirely one-sided. This form of "parasocial trust" can deepen engagement and usability, but it also raises new ethical questions—especially when role-play begins to displace critical judgement or blur the boundary between simulated and social understanding.

    This episode explores the phenomenon of parasocial trust in human–AI interaction, where chatbots designed to mimic human conversation begin to feel socially present. Through natural language, simulated care, and rhetorical cues, these systems invite users to assign them roles, fill in missing context, and interpret responses as if they were part of a reciprocal relationship. That trust (affective rather than cognitive) can deepen engagement, but it also carries risks, especially when design features create the illusion of mutuality where none exists.

    The reflections in this episode are based on a paper presented at ACM FAccT 2024, an interdisciplinary conference on fairness, accountability, and transparency in AI. The conference is hosted by the Association for Computing Machinery (ACM), the world’s largest scientific and educational computing society. Submissions to FAccT are peer-reviewed and held to journal-level standards, drawing contributions from computer science, law, social science, and the humanities.

    Key ideas from Ep. 6: When AI Feels Human

    The key theme of this episode is how chatbots can start to feel socially present despite offering no real reciprocity, and how design choices like natural language, simulated care, and rhetorical warmth help cultivate a one-sided but affectively strong connection that feels intuitive, even relational. Here are a few ideas that stood out:

    1. Anthropomorphic cues make projection easy

    Chatbots don't just answer questions—they mirror human conversation patterns.

    * Small phrases like “how can I help?” or “I understand” signal care, even if no understanding exists

    * Turn-taking, affirmations, and apologies simulate mutual engagement

    * Users often respond by assigning the chatbot a social role—assistant, mentor, therapist

    This feels intuitive, but it isn’t neutral. The interface subtly invites projection.

    2. Trust can be affective—even when it shouldn’t be

    The trust users place in chatbots isn’t always about accuracy or performance.

    * “Parasocial trust” develops through the feeling of being heard or helped

    * This kind of trust doesn’t require competence—it’s built on cues of warmth and responsiveness

    * That’s what makes it sticky, and sometimes hard to interrogate

    It’s easier to trust what feels familiar than to evaluate what’s actually true.

    3. Ambiguity fuels role assignment

    When systems don’t clearly explain themselves, users fill in the blanks.

    * Design ambiguity leaves space for users to imagine intention or expertise

    * The more convincing the persona, the more likely users are to misattribute judgement or care

    * Over time, these interactions can reshape how people think about where support or validation comes from

    This isn’t just a UX issue. It’s a behavioural one—because it changes how people interpret the interaction.

    These dynamics aren’t inherently harmful. But they can be ethically slippery, especially when chatbots are used in contexts like education, wellbeing, or personal advice. One thing that stood out in the paper is how quickly trust builds—and how little users are asked to question it.

    If you're working with systems that simulate social presence, or even just using them regularly, it's worth paying attention to how quickly functional responses become relational ones. What roles are you projecting? And what’s being displaced in the process?

    If this resonates—or doesn’t—I’d welcome your reflections.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    12 mins
  • Ep. 5: The AI Mirror and the Trouble with Reflection
    Apr 16 2025
    This episode draws on The AI Mirror and related sources to examine a quiet but far-reaching danger: not that AI will surpass us, but that it reflects us too well. Framed as a “mirror,” AI doesn’t invent: it extracts and amplifies the values, patterns, and flaws already embedded in the data it’s trained on. What we see in its outputs may feel familiar, even insightful, but the danger is that this familiarity can distort rather than clarify.

    From moral deskilling to the erosion of imagination, the episode explores how over-reliance on AI mirrors risks weakening the very capacities we need to steer technology wisely. If these systems project the past into the future, how do we cultivate moral growth, creative vision, or the courage to change?

    The idea of AI as pathology—not because it is malicious, but because it magnifies what is already broken—runs through Shannon Vallor’s work. This episode asks what it would mean to reclaim our role as agents, not just reflectants, in a system increasingly built to automate our judgement.

    Source: Vallor, Shannon (2024) The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking

    A longer companion to Ep. 5 of Artificial Thought Podcast

    If we accept that AI functions as a mirror—reflecting, amplifying, and sometimes distorting—then the question shifts from what it shows to how that act of showing reshapes us. Reflection invites interpretation. It draws us into a process of response, sometimes imitation. And when systems designed to reflect patterns begin to shape how we form ideas, choose directions, or assess relevance, their influence extends beyond interpretation into the construction of thought itself.

    Shannon Vallor explores this process in The AI Mirror, where she traces how tools that simulate understanding can begin to affect our capacity to understand ourselves. Her concern lies in the kinds of habits these systems cultivate: ways of thinking that become less exploratory, less situated in ambiguity, and more aligned with previously recorded outcomes. These aren’t hypothetical risks—they surface in how judgement adapts to interfaces that reduce uncertainty in advance by filtering options, predicting needs, or offering guidance without prompting.

    That predictive smoothness can feel useful, even necessary, in complex systems, but it also alters how discomfort and uncertainty are processed. In many contexts, those moments of ambiguity have served an important function. They have offered the conditions under which values are examined, choices are weighed, and dissent becomes visible. When those conditions are gradually displaced by systems that resolve uncertainty on our behalf, the practice of reflection begins to contract.

    This is where the idea of generative friction becomes relevant. The systems most people interact with today are not hostile to human judgement, but they often work around it. They create environments in which thinking still happens, but with less resistance and fewer invitations to pause. Over time, this shifts the locus of effort away from sense-making and toward review. When reflection is no longer required, the space in which it might occur begins to fade.

    Vallor refers to this process as a kind of moral deskilling—not as a sharp decline, but as a gradual change in what gets exercised and what does not. When outputs appear reliable and consistent, even without explanation, the habit of seeking deeper coherence may become less frequent. The work of identifying context, resolving conflict, or articulating reasons takes time. When that time is no longer structurally supported, the behaviours it once enabled may begin to recede.

    This process is often subtle. It includes what Vallor describes as reverse adaptation—where human behaviour shifts to better fit the assumptions of the tools used to shape it. These shifts can be seen in settings where system compatibility is rewarded: workplaces where efficiency metrics structure effort, classrooms where emotion is translated into performance data, interfaces that encourage alignment with predicted preferences. In each case, tools that were introduced to support decision-making begin to influence how decisions are framed.

    Even so, Vallor doesn’t frame these developments as inevitable. There are paths through which AI could augment rather than compress human capacities. Doing so requires a different kind of design orientation, one that foregrounds not only capability but conditions that support judgement, allow for disagreement, and sustain attention across moments of uncertainty. These are behavioural questions as much as technical or ethical ones, and they involve rethinking what environments are being optimised for, and what capacities they encourage us to develop.

    The podcast episode outlines Vallor’s central argument. This reflection builds on that overview by considering how friction, effort, and moral discernment are shaped by the tools we use. When patterns are ...
    21 mins
  • Ep. 4: Why AI Adoption Isn’t an AI Problem
    Apr 14 2025
    Most AI strategy reports focus on models, metrics, and monetisation, and McKinsey’s State of AI 2024 is no exception. But between the charts and case studies, it quietly surfaces something deeper: adopting AI means redesigning workflows. Redesigning workflows means reshaping human behaviour, and that’s where behavioural scientists can contribute.

    The podcast episode is a quick overview of the report’s findings, and this companion post makes the behavioural case for why it matters. If you work in behaviour change, systems thinking, organisational design, or decision science, this is your signal: AI adoption is a human transformation just as much as it’s a technical one.

    Most AI strategy reports focus on models, performance metrics, or return on investment, and McKinsey's State of AI 2024 report follows this familiar template. However, a closer reading through a behavioural lens suggests that the core challenges organisations face are not primarily technical. They relate to how people work, adapt, and respond within changing systems.

    One of the report's clearest findings is that companies seeing the greatest value from AI are not necessarily those deploying the most advanced tools. Instead, they are the ones that have made deliberate changes to how workflows are structured. This suggests that the effort required to benefit from AI lies less in model sophistication and more in the design of day-to-day processes. From a behavioural perspective, that involves a set of tasks that go beyond the scope of engineering or data science.

    This post accompanies a recent podcast episode summarising the report. It aims to situate that summary within a broader reflection on how behavioural science could play a more central role in AI adoption. The uptake of generative tools is often framed in terms of technical integration. Yet the work of integration is largely human. It involves shifts in habits, assumptions, and informal decision rules—areas where behavioural science has long experience.

    Workflow redesign signals a behavioural shift

    Among the report's findings, one in particular stands out: workflow redesign is the only organisational factor that consistently predicts positive economic outcomes from AI adoption. Despite this, relatively few companies report having changed how work is actually done. Most have added tools into existing systems without rethinking the underlying tasks or responsibilities.

    From a behavioural science point of view, this is a familiar pattern. New systems are introduced with the expectation that they will change behaviour, yet little attention is paid to the conditions that shape how people actually behave. If workflows are not adapted to account for changes in decision dynamics, information flow, or accountability, the effects of new tools may remain limited.

    The idea of workflow redesign can therefore be seen not just as an operational measure but as a form of behavioural intervention. It draws attention to the routines, defaults, and incentives that guide behaviour. In the context of generative AI, this includes how outputs are evaluated, how iteration is managed, and how responsibility is distributed.

    Generative tools reconfigure where effort is required

    One common framing of generative AI is that it reduces friction in creative or cognitive tasks. Tasks that once required considerable time or skill can now be completed more quickly. However, this reduction in effort does not necessarily translate into overall ease. Instead, the point at which effort is required may shift.

    The report notes variation in how organisations handle the review of AI-generated outputs. In some cases, outputs are consistently checked; in others, they are used with minimal oversight. This inconsistency points to a broader ambiguity around quality assurance, decision authority, and accountability. These are not technical questions. They are questions about how work is defined and how decisions are distributed.

    When generation is easy, evaluation often becomes harder. This is particularly true in contexts where outputs are numerous and where each appears superficially plausible. The work of discerning what is appropriate or trustworthy becomes a new form of labour—one that is often under-specified in organisational processes.

    Behavioural science contributes to system design

    The implementation of generative systems often focuses on surface-level functionality. Tools are assessed based on speed, usability, or output quality. Yet many of the problems that emerge after deployment relate to how those tools are used in context. Behavioural science offers methods for examining that context, identifying where friction occurs, and designing environments that better support decision-making.

    For instance, new workflows often generate cognitive demands that are not visible in formal process maps. These include repeated judgement calls, switching between exploratory and evaluative modes, and managing ambiguity about completion. ...
    14 mins
  • Ep. 3: AI Narratives Across Cultures
    Apr 13 2025

    What does AI mean in different parts of the world? This episode draws on Imagining AI: How the World Sees Intelligent Machines, an interdisciplinary collection edited by Stephen Cave and Kanta Dihal, to explore how artificial intelligence is understood, feared, hoped for, and narrated across cultures.

    From dystopian Italian comics to Chilean cyberpunk, from Indigenous ontologies in Nigeria to social robots in Japan, the book reveals that “AI” isn’t a universal concept—it’s shaped by language, history, power, and politics. These cultural imaginaries don’t just influence science fiction; they affect how AI is built, governed, and adopted.

    We unpack how different societies frame intelligence, agency, and the boundaries between human and machine. And we ask what it would mean to decentre Western narratives—to move from exporting AI ethics to co-creating them.

    This episode is part of Artificial Thought, a series exploring how behavioural science and cultural context can help us rethink the systems we call intelligent.

    Cave, S., & Dihal, K. (Eds.). (2023). Imagining AI: how the world sees intelligent machines. Oxford University Press.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    24 mins