
The Road to Accountable AI

By: Kevin Werbach

About this listen

Artificial intelligence is changing business and the world. How can you cut through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and in 2016 created one of the first business school courses on the legal and ethical considerations of AI. He interviews the experts and executives building accountable AI systems in the real world, today.
Episodes
  • Ray Eitel-Porter, Co-Author of Governing the Machine: The Confidence to Use AI
    Mar 12 2026

Ray Eitel-Porter, former Global Lead for Responsible AI at Accenture and co-author of the new book Governing the Machine, discusses how enterprises can move from abstract AI principles to practical governance. He emphasizes that organizations can only realize AI's benefits if responsibility is embedded into everyday business processes rather than treated as a standalone compliance exercise. Drawing on his experience leading global data and AI programs, Eitel-Porter explains how the release of ChatGPT transformed enterprise attitudes toward AI, accelerating adoption while exposing risks such as hallucinations, reliability failures, and reputational harm. Effective governance, he argues, has evolved from static principles to operational controls, including workflow checkpoints, red teaming, and technical guardrails, particularly for generative AI systems with inherently probabilistic outputs. On risk, he stresses that not all AI use cases require the same level of scrutiny: governance should scale with potential impact and harm, focusing on what an AI system is intended to do so that non-technical teams can surface high-risk use cases without incentives to downplay risk.

    On regulation, Eitel-Porter notes that despite uncertainty around the EU AI Act, many multinational companies are treating it as a global baseline, similar to GDPR, while contrasting this with more deregulatory signals from the United States and questioning the global influence of the UK's middle-ground approach. He also shares insights from Governing the Machine, co-authored with Miriam Bogle and Paul Donkhan, emphasizing that AI governance is not a barrier to innovation but the foundation that allows organizations to deploy AI at scale with confidence and control.

    Ray Eitel-Porter is a Senior Advisor at Accenture and the former Global Lead for Responsible AI, where he designed and scaled AI governance programs for multinational organizations. He previously led Accenture's data and AI practice in the UK and has over a decade of experience advising companies on responsible AI, data governance, and emerging technology risk. Eitel-Porter is the co-author of Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential (Bloomsbury, 2025) and has led multi-year programs across public and private sectors, including global banks, retailers, and health brands.

    Transcript


    Governing the Machine (Bloomsbury 2025)

    Lessons from the Frontline – Designing and Implementing AI Governance (AI Journal)

    33 mins
  • Alexandru Voica: Responsible AI Video
    Dec 18 2025

    Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence.

    Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning.

    Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university.

    Transcript

    Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)

    Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)

    Computerspeak Newsletter

    38 mins
  • Blake Hall: Safeguarding Identity in the AI Era
    Dec 11 2025

    In this episode, Blake Hall, CEO of ID.me, discusses the massive escalation in online fraud driven by generative AI, noting that attacks have evolved from "Nigerian prince" scams to sophisticated, scalable social engineering campaigns that threaten even the most digital-savvy users. He explains that traditional knowledge-based verification methods are now obsolete due to data breaches, shifting the security battleground to biometric and possession-based verification. Hall details how his company uses advanced techniques—like analyzing light refraction on skin versus screens—to detect deepfakes, while emphasizing a "best of breed" approach that relies on government-tested vendors.

    Beyond the threats, Hall outlines a positive vision for a digital wallet that functions as a user-controlled "digital twin," allowing individuals to share only necessary data (tokenized identity) rather than overexposing personal information. He argues that government agencies must play a stronger role in validating core identity attributes to stop synthetic fraud and suggests that future AI "agents" will rely on cryptographically signed credentials to act on our behalf securely. Ultimately, he advocates for a model where companies "sell trust, not data," empowering users to control their own digital identity across finance, healthcare, and government services.

    Blake Hall is the Co-Founder and CEO of ID.me, a digital identity network with over 150 million members that simplifies how individuals prove and share their identity online. A former U.S. Army Ranger, Hall led a reconnaissance platoon in Iraq and was awarded two Bronze Stars, including one for valor, before earning his MBA from Harvard Business School. He has been recognized as CEO of the Year by One World Identity and an Entrepreneur of the Year by Ernst & Young for his work in pioneering secure, user-centric digital identity solutions.

    Transcript

    He Once Hunted Terrorists in Iraq. Now He Runs a $2 Billion Identity Verification Company (Inc., November 11, 2025)

    "No Identity Left Behind": How Identity Verification Can Improve Digital Equity (ID.me)

    34 mins