
The Road to Accountable AI

By: Kevin Werbach

Summary

Artificial intelligence is changing business and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on the legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world, today.
Episodes
  • Var Shankar: AI Governance for Smaller Organizations
    May 7 2026

    Var Shankar makes the case that most AI governance guidance is built for large, sophisticated, multifunctional global enterprises — and that this leaves out the roughly half of American workers employed at organizations with fewer than 500 people. Through the Council on AI Governance, the nonprofit he leads with Alexis Cook, he is trying to fill that gap with open, current, and pragmatic resources, including an AI Governance Playbook organized around four focus areas: strategy, risk and compliance, workforce literacy, and operational management. He tells Kevin that the case for AI governance no longer needs to be made; what smaller organizations now need is help asking vendors the right questions and clarifying who owns what internally when a few people are doing many jobs.

    The conversation then turns to the parts of the field Var thinks are most undercooked. Workforce literacy, he argues, is the focus area most often neglected because it functions as a vitamin rather than a painkiller — long-term, hard to resource, and easy to reduce to a training module when what is actually needed is hands-on involvement in pilots and documentation. He explains why healthcare offers an unusually strong foundation for AI assurance, with its existing regulatory architecture, comfort with use-case variability, and tradition of post-deployment monitoring, and he describes assurance itself as the connective tissue between an organization and the outside world — distinct from regulation and from internal governance, not a substitute for either. Drawing on a pilot he co-authored with the Standards Council of Canada testing system-level certification at a Canadian bank, he highlights two surprising lessons: that even simplified certification criteria get interpreted differently by different actors, and that even one of the world's most forward-thinking public standards bodies lacked the technical capacity to play standard-setter for something as dynamic as an AI system. He closes with practical advice for risk and compliance professionals: start with the positive vision of what the organization is trying to do with AI, observe how existing IT, data, and security governance already work, and identify which standards ecosystems the organization is already plugged into.

    Var Shankar is Executive Director of the Council on AI Governance, an independent nonprofit developing open AI governance resources for organizations of all sizes. He previously served as Executive Director of the Responsible AI Institute and as Chief AI and Privacy Officer at Enzai, a regtech AI compliance startup. An attorney by training and a graduate of Harvard Law School, he practiced law at Cravath, Swaine & Moore and earlier worked on the Clinton Global Initiative and with the government of British Columbia on digital government and COVID response. He teaches AI governance at Purdue, where he has helped develop a master's-level AI auditing program, and serves on the OECD Network of Experts on AI, the World Economic Forum's AI Governance Alliance, and the Brookings Forum for Cooperation on AI. He co-developed Kaggle's Intro to AI Ethics course with Alexis Cook.

    Transcript

    Council on AI Governance: AI Governance Playbook

    Context-specific certification of AI systems: a pilot in the financial industry (AI and Ethics, 2025)

    Standards Council of Canada AI accreditation pilot

    29 mins
  • Katie Fowler (Thomson Reuters Foundation): How 3,000 Companies Approach AI Governance
    Apr 30 2026

    Good data about how companies are implementing AI governance programs is essential both for organizations to benchmark their efforts, and for observers to understand the state of development. In this episode, Katie Fowler, Director of Responsible Business at the Thomson Reuters Foundation, joins Kevin Werbach to discuss the findings of Responsible AI in Practice, a new report drawing on a global dataset of roughly 3,000 companies across 13 sectors.

    Fowler unpacks the report's central finding: an enormous gap between corporate AI ambition and operational governance, with 44 percent of companies reporting an AI strategy but only 13 percent publicly committing to a formal governance framework. She argues that the gap is structural rather than just a disclosure failure, noting that AI expertise often sits deep within technical teams rather than at the leadership levels responsible for organization-wide rollout. She points to striking regional variation in workforce protections and the EU AI Act's emergence as a de facto global reference framework even outside Europe, and pushes back on the narrative that regulation stifles innovation. Looking forward, she discusses how investors are using transparency as a proxy for risk management in the absence of mature responsible AI metrics, and outlines the long-term vision of building a dataset robust enough to support a responsible AI index tied to financial materiality.

    Katie Fowler is Director of Responsible Business at the Thomson Reuters Foundation, the independent charity affiliated with Thomson Reuters. She leads initiatives including the Workforce Disclosure Initiative (a global platform collecting survey data on how companies treat workers across their direct operations and supply chains) and the AI Company Data Initiative, launched in partnership with UNESCO. Before joining the Foundation, Fowler held leadership roles at The Social Innovation Partnership and Chance for Childhood.

    Transcript

    Responsible AI in Practice: 2025 Global Insights from the AI Company Data Initiative

    Why a Companywide Effort Is Key to Responsible and Trustworthy AI Adoption (Katie Fowler, techUK guest blog, 2025)

    38 mins
  • Henry Ajder, Latent Space Advisory: Deepfakes and the Crisis of Digital Trust
    Apr 23 2026

    AI-generated deepfakes are exploding in volume and quality, posing frightening challenges for public discourse, security, safety, and more. My guest, Henry Ajder, has been mapping the deepfake landscape since before most people had heard the term. In this conversation, he describes the dramatic changes in realism, efficiency, accessibility, and functionality of synthetic media tools since he published the first comprehensive census of deepfakes in 2019. Ajder describes the current moment as one of "epistemic nihilism," where people cannot reliably distinguish real from synthetic content and the available technological responses are not yet at a level of categorical trust. He introduces a framework of "deception, doubt, and degradation" for understanding deepfake harms, and draws a distinction between the clearly malicious, the clearly beneficial, and a vast unsettling middle ground of uses that society has not yet figured out how to evaluate.

    On the response side, Ajder warns that media literacy advice is not just outdated but actively harmful, because it gives people false confidence in their ability to spot fakes. Detection tools, watermarking, and content provenance standards like C2PA, while valuable, each have real limitations. Ajder's practical advice for organizations centers on red-teaming, understanding what your tool is actually for and who it serves, and recognizing that authenticity is a strategic asset in a synthetic age.

    Henry Ajder is the founder of Latent Space Advisory and one of the world's foremost experts on deepfakes and generative AI. He authored the landmark 2019 State of Deepfakes report, and has since advised organizations including Meta, Adobe, the UK Government, the EU Commission, the US FTC, and the World Economic Forum. He co-leads the University of Cambridge's Generative AI in Business programme, and sits on Meta's Reality Labs Advisory Council.

    Transcript
    Latent Space Advisory

    The State of Deepfakes: Landscape, Threats, and Impact (2019)

    The Future Will Be Synthesised (BBC Radio 4 Documentary Series, 2022)

    39 mins