• Ep. 6: When AI Feels Human

  • Apr 18 2025
  • Length: 12 mins
  • Podcast

Ep. 6: When AI Feels Human

  • Summary

This episode offers a short overview of the article “When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design” (Maeda & Quan-Haase, 2024), which explores how conversational AI systems designed with human-like features foster parasocial relationships through language, affective cues, and simulated reciprocity. These design choices invite users to assign social roles to chatbots, project meaning into partial responses, and develop a kind of trust that feels relational but is entirely one-sided. This form of "parasocial trust" can deepen engagement and usability, but it also raises new ethical questions, especially when role-play begins to displace critical judgement or blur the boundary between simulated and social understanding.

The episode examines the phenomenon of parasocial trust in human–AI interaction, where chatbots designed to mimic human conversation begin to feel socially present. Through natural language, simulated care, and rhetorical cues, these systems invite users to assign them roles, fill in missing context, and interpret responses as if they were part of a reciprocal relationship. That trust (affective rather than cognitive) can deepen engagement, but it also carries risks, especially when design features create the illusion of mutuality where none exists.

The reflections in this episode are based on a paper presented at ACM FAccT 2024, an interdisciplinary conference on fairness, accountability, and transparency in AI. The conference is hosted by the Association for Computing Machinery (ACM), the world’s largest scientific and educational computing society. Submissions to FAccT are peer-reviewed and held to journal-level standards, drawing contributions from computer science, law, social science, and the humanities.

Key ideas from Ep. 6: When AI Feels Human

The key theme of this episode is how chatbots can start to feel socially present despite offering no real reciprocity, and how design choices like natural language, simulated care, and rhetorical warmth help cultivate a one-sided but affectively strong connection that feels intuitive, even relational. Here are a few ideas that stood out:

1. Anthropomorphic cues make projection easy

Chatbots don't just answer questions; they mirror human conversation patterns.

  • Small phrases like “how can I help?” or “I understand” signal care, even if no understanding exists
  • Turn-taking, affirmations, and apologies simulate mutual engagement
  • Users often respond by assigning the chatbot a social role: assistant, mentor, therapist

This feels intuitive, but it isn’t neutral. The interface subtly invites projection.

2. Trust can be affective, even when it shouldn’t be

The trust users place in chatbots isn’t always about accuracy or performance.

  • “Parasocial trust” develops through the feeling of being heard or helped
  • This kind of trust doesn’t require competence; it’s built on cues of warmth and responsiveness
  • That’s what makes it sticky, and sometimes hard to interrogate

It’s easier to trust what feels familiar than to evaluate what’s actually true.

3. Ambiguity fuels role assignment

When systems don’t clearly explain themselves, users fill in the blanks.

  • Design ambiguity leaves space for users to imagine intention or expertise
  • The more convincing the persona, the more likely users are to misattribute judgement or care
  • Over time, these interactions can reshape how people think about where support or validation comes from

This isn’t just a UX issue. It’s a behavioural one, because it changes how people interpret the interaction.

These dynamics aren’t inherently harmful. But they can be ethically slippery, especially when chatbots are used in contexts like education, wellbeing, or personal advice. One thing that stood out in the paper is how quickly trust builds, and how little users are asked to question it.

If you're working with systems that simulate social presence, or even just using them regularly, it's worth paying attention to how quickly functional responses become relational ones. What roles are you projecting? And what’s being displaced in the process?

If this resonates, or doesn’t, I’d welcome your reflections. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
