AI Ethics Now

By: Tom Ritchie, Jennie Mills, IATL, WIHEA, University of Warwick

About this listen

AI Ethics Now is a podcast dedicated to exploring the complex issues surrounding artificial intelligence from a non-specialist perspective, including bias, ethics, privacy, and accountability. Join us as we discuss the challenges and opportunities of AI and work towards a future where technology benefits society as a whole. The podcast was first developed by Dr Tom Ritchie and Dr Jennie Mills for The AI Revolution: Ethics, Technology, and Society module, taught through IATL at the University of Warwick.
Episodes
  • 12. AI and Dialogic Feedback: Reframing Student Agency Through AI Partnerships
    Feb 2 2026

    What happens when AI becomes a dialogic partner in feedback rather than a replacement for human judgment?

    Dr Viktoria Magne, Dr Rebecca Mace, Sarah Hooper, and Dr Sharon Vince from the University of West London and the University of Worcester reveal how structured AI conversations are helping students engage more deeply with feedback whilst keeping academic judgment clearly human-led.

    This conversation explores how AI creates low-stakes, judgment-free spaces where students can question, challenge, and co-construct understanding without fear of looking silly or upsetting relationships with staff. The team shares how they've designed reflective cycles using structured prompts that position students as active agents rather than passive recipients, and why this matters for equity, emotional safety, and critical AI literacy.

    We discuss the difference between transactional and dialogic AI use, why feedback shouldn't feel like static judgment, how AI helps students engage in "conversation with themselves", and what happens when first-generation students gain access to a network they've never had before. The team explains why digital literacy means learning to question AI outputs, not just operate tools, and how transparency around staff AI use builds trust.

    This episode continues our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

    AI Ethics Now

    Exploring the ethical dilemmas of AI in Higher Education and beyond.

    A University of Warwick IATL Podcast

    This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness'.

    This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share these discussions to offer a wider audience a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence.

    Join us each fortnight for new critical conversations on AI ethics with local, national, and international experts.

    We will discuss:

    • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
    • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
    • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.

    If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.

    25 mins
  • 11. AI and Assessments: When Students Ask "Does This Sound Like Me?"
    Jan 18 2026

    What happens when students delegate not just writing, but reasoning itself to AI?

    Dr Chahna Gonsalves, Senior Lecturer at King's Business School, reveals how generative AI is transforming critical thinking in higher education through what she calls "epistemic offloading": the process of outsourcing intellectual work to tools like ChatGPT.

    This conversation examines how students are using AI to interpret readings, generate argument structures, and pre-evaluate their own work, shifting responsibility for core intellectual tasks. Chahna explores why AI prizes polish over depth, how this affects students' evaluative judgment, and what happens when students ask "does this sound like me?"

    We discuss the equity implications of tech-savviness, why reflexive AI use matters more than bans, and how Bloom's Taxonomy reveals which cognitive processes students readily offload versus protect. Chahna argues we need transparent conversations about delegation, judgment, and what truly requires human reasoning.

    Essential listening for anyone grappling with AI's role in learning, assessment design, and the future of thinking itself.

    This episode continues our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

    32 mins
  • 10. AI and Dependence: Are We Misdiagnosing the Harms?
    Jan 4 2026

    Do you use ChatGPT or Claude daily for work? Mark Carrigan, Senior Lecturer in Education at the Manchester Institute of Education, joins the podcast to discuss why we might be misdiagnosing the harms of generative AI. His research suggests the problems aren't inherent to the technology itself, but arise when AI systems meet the already broken bureaucracies of higher education and other sectors.

    Mark introduces the LLM Interaction Cycle, a framework he developed with the philosopher of technology Milan Stürmer to understand how we engage with AI over time through three phases: positioning (how we assign roles to the AI), articulation (how we put our needs into words), and attunement (the sense that the AI understands us). He explains how use that begins as purely transactional often drifts toward something more affective as models build memory and context about us, and why this drift matters for how we think about ethical AI use.

    We go on to explore teacher agency in the age of generative AI, examining why fear of appearing ignorant prevents honest conversations between educators and students. Mark discusses three key risks facing universities:

    • lock-in (dependency on specific platforms),
    • loss of reflection (increasingly habitual rather than thoughtful use), and
    • commercial capture (vendor interests shaping institutional practices).

    He argues that reflective use isn't just beneficial but ethically necessary, yet the pressures facing academics and students make reflection increasingly difficult.

    The conversation finishes by examining why universities in financial crisis are particularly vulnerable to both the promises and pitfalls of AI adoption, how institutional AI strategies risk creating new waves of disruption, and why understanding student realities (including significant paid work commitments) is essential to addressing concerns about AI in education. Mark concludes by making the case that we cannot understand the problems of generative AI without understanding the wider systemic crisis in higher education.

    This episode launches our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

    35 mins