This episode reflects on a 2024 Nature Human Behaviour article by Moshe Glickman and Tali Sharot, which investigates how interacting with AI systems can subtly alter human perception, emotion, and social judgement. Their research shows that when humans interact with even slightly biased AI, their own biases increase over time—and more so than when interacting with other people.
This creates a feedback loop: humans train AI, and AI reshapes how humans see the world. The paper highlights a dynamic that often goes unnoticed in AI ethics or UX design conversations—how passive, everyday use of AI systems can gradually reinforce distorted norms of judgement.
These reflections are especially relevant for AI developers, behavioural researchers, and policymakers thinking about how systems influence belief, bias, and social cognition over time.
Source: Glickman, M., Sharot, T. How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat Hum Behav 9, 345–359 (2025). https://doi.org/10.1038/s41562-024-02077-2
Key ideas from Ep. 9: The Bias Loop
This episode reflects on a 2024 article in Nature Human Behaviour by Moshe Glickman and Tali Sharot, which explores how human–AI interactions create feedback loops that amplify human biases. The core finding: slightly biased AI doesn’t just reflect human judgement—it magnifies it. And when humans repeatedly engage with these systems, they often adopt those amplified biases as their own.
Here are three things worth paying attention to:
1. AI doesn't just mirror—it intensifies
Interacting with AI can shift our perceptions more than interacting with people.
* AI systems trained on slightly biased data tended to exaggerate that bias.
* When people then used those systems, their own bias increased—sometimes substantially.
* This happened across domains: perceptual tasks (e.g. emotion recognition), social categorisation, and even real-world image generation (e.g. AI-generated images of “financial managers”).
Unlike human feedback, AI judgements feel consistent, precise, and authoritative—making them more persuasive, even when wrong.
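To make the loop concrete, here is a minimal toy simulation of the dynamic, not the authors' actual models, task, or parameters. It assumes an AI that exaggerates the majority tendency in its training labels by a fixed factor, and a person who shifts part-way toward the AI's output after each interaction; the starting bias, amplification factor, and adjustment rate are all illustrative.

```python
# Toy simulation of the human -> AI -> human bias loop sketched above.
# All numbers (initial bias, amplification strength, adjustment rate) are
# illustrative assumptions, not parameters from Glickman & Sharot's study.

def train_ai(human_rate, amplification=1.5):
    """An AI trained on slightly biased labels that exaggerates the majority
    tendency: deviations from 50/50 are amplified, then clipped to [0, 1]."""
    deviation = human_rate - 0.5
    return min(max(0.5 + amplification * deviation, 0.0), 1.0)

def update_human(human_rate, ai_rate, adjustment=0.3):
    """A person nudging their own judgement part-way toward the AI's output."""
    return human_rate + adjustment * (ai_rate - human_rate)

human_rate = 0.53  # slight initial bias, e.g. calling 53% of ambiguous faces "sad"
for round_number in range(1, 6):
    ai_rate = train_ai(human_rate)                  # AI amplifies the human bias
    human_rate = update_human(human_rate, ai_rate)  # repeated use shifts the human
    print(f"round {round_number}: AI = {ai_rate:.3f}, human = {human_rate:.3f}")
```

Run for a few rounds, the simulated human drifts steadily away from 50/50 even though each individual step is small, which mirrors the gradual accumulation the study describes.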
2. People underestimate AI’s influence
Participants thought they were being more influenced by accurate AI—but biased AI shaped their thinking just as much.
* Most participants didn’t realise how much the biased AI was nudging them.
* Feedback labelled as coming from “AI” had a stronger influence than when labelled as “human,” even when the content was identical.
* This suggests that perceived objectivity enhances influence—even when the output is flawed.
Subtle framing cues (like labelling) matter more than we assume in shaping trust and uptake.
3. Feedback loops are a design risk—and an opportunity
Bias can accumulate over time. But so can accuracy.
* Repeated exposure to biased AI increased human bias, but repeated exposure to accurate AI improved human judgement.
* Small changes in training data, system defaults, or how outputs are framed can shift trajectories over time.
* That means AI systems don’t just transmit information. They shape norms of perception and evaluation.
Design choices that reduce error or clarify uncertainty won’t just improve individual outputs—they could reduce cumulative bias at scale.
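Under the same toy assumptions as the earlier sketch, swapping the amplifying AI for an accurate one (here, one that simply reports an assumed 50/50 ground truth) reverses the trajectory: the simulated human is pulled back toward the true rate rather than away from it. Again, this is an illustrative sketch, not an analysis from the paper.

```python
# Same toy loop, but the AI now reports an assumed ground-truth rate (0.50)
# instead of amplifying the human's bias; all values remain illustrative.

def update_human(human_rate, ai_rate, adjustment=0.3):
    """A person nudging their own judgement part-way toward the AI's output."""
    return human_rate + adjustment * (ai_rate - human_rate)

TRUE_RATE = 0.50   # assumed ground truth for the toy task
human_rate = 0.56  # bias accumulated from earlier exposure to a biased AI
for round_number in range(1, 6):
    human_rate = update_human(human_rate, TRUE_RATE)  # accurate AI pulls judgement back
    print(f"round {round_number}: human = {human_rate:.3f}")
```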
The study’s findings offer a clear behavioural mechanism for something often discussed in theory: how AI systems can influence society indirectly, through micro-shifts in user cognition. For developers, that means accounting not just for output accuracy, but for how people change through use. For behavioural scientists, it raises questions about how norms are formed in system-mediated environments. And for policy, it adds weight to the argument that user-facing AI isn’t just a content issue—it’s a cognitive one.
Always curious how others are approaching these design risks. Until next time.