Episodes

  • Kay Firth-Butterfield: Using AI Wisely
    Jun 26 2025

    Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world’s first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She traces the field from its early days, when it was dominated by philosophical debates, to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without safeguards is eroding public confidence. Drawing on her work with Fortune 500 firms and her own cancer journey, she argues for human-centered AI, especially in high-stakes areas like healthcare and law. She also underscores the equity issues tied to biased training data and lack of access in the Global South, noting that AI is now generating data based on historical biases. Despite these challenges, she remains optimistic and calls for greater focus on sustainability, access, and AI literacy across sectors.

    Kay Firth-Butterfield is the founder and CEO of Good Tech Advisory LLC. She was the world’s first C-suite appointee in AI ethics and was the inaugural Head of AI and Machine Learning at the World Economic Forum from 2017 to 2023. A former judge and barrister, she advises governments and Fortune 500 companies on AI governance and remains affiliated with Doughty Street Chambers in the UK.

    Transcript


    Kay Firth-Butterfield Is Shaping Responsible AI Governance (Time100 Impact Awards)

    Our Future with AI Hinges on Global Cooperation

    Building an Organizational Approach to Responsible AI

    Co-Existing with AI - Firth-Butterfield's Forthcoming Book

    30 mins
  • Dale Cendali: How Courts (and Maybe Congress!) Will Determine AI's Copyright Fate
    Jun 19 2025

    Kevin Werbach interviews Dale Cendali, one of the country’s leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as the creative industries.

    While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution.

    Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on generative AI training, what counts as infringement in AI outputs, and what constitutes sufficient human authorship for copyright protection of AI-generated works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution.

    Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm’s nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute’s Copyright Restatement project and sits on the Board of the International Trademark Association.

    Transcript


    Thomson Reuters Wins Key Fair Use Fight With AI Startup

    Dale Cendali - 2024 Law360 MVP

    Copyright Office Report on Generative AI Training

    40 mins
  • Brenda Leong: Building AI Law Amid Legal Uncertainty
    Jun 12 2025
    Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

    Brenda Leong is Director of ZwillGen’s AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

    Transcript

    AI Audits: Who, When, How...Or Even If?

    Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda
    37 mins
  • Shameek Kundu: AI Testing and the Quest for Boring Predictability
    Jun 5 2025

    Kevin Werbach interviews Shameek Kundu, Executive Director of AI Verify Foundation, to explore how organizations can ensure AI systems work reliably in real-world contexts. AI Verify, a government-backed nonprofit in Singapore, aims to build scalable, practical testing frameworks to support trustworthy AI adoption. Kundu emphasizes that testing should go beyond models to include entire applications, accounting for their specific environments, risks, and data quality. He draws on lessons from AI Verify’s Global AI Assurance pilot, which matched real-world AI deployers—such as hospitals and banks—with specialized testing firms to develop context-aware testing practices. Kundu explains that the rise of generative AI and widespread model use has expanded risk and complexity, making traditional testing insufficient. Instead, companies must assess whether an AI system performs well in context, using tools like simulation, red teaming, and synthetic data generation, while still relying heavily on human oversight. As AI governance evolves from principles to implementation, Kundu makes a compelling case for technical testing as a backbone of trustworthy AI.

    Shameek Kundu is Executive Director of the AI Verify Foundation. He previously held senior roles at Standard Chartered Bank, including Group Chief Data Officer and Chief Innovation Officer, and co-founded a startup focused on testing AI systems. Kundu has served on the Bank of England’s AI Forum, Singapore’s FEAT Committee, the Advisory Council on Data and AI Ethics, and the Global Partnership on AI.

    Transcript

    AI Verify Foundation

    Findings from the Global AI Assurance Pilot

    Starter Kit for Safety Testing of LLM-Based Applications

    37 mins
  • Uthman Ali: Responsible AI in a Safety Culture
    May 29 2025

    Host Kevin Werbach interviews Uthman Ali, Global Responsible AI Officer at BP, to delve into the complexities of implementing responsible AI practices within a global energy company. Ali explains how the industry's culture of safety shapes BP's willingness to engage in AI governance. He discusses the necessity of embedding ethical AI principles across all levels of the organization, with tailored training programs for various employee roles—from casual AI users to data scientists—to ensure a comprehensive understanding of AI’s ethical implications. He also highlights the importance of proactive governance, advocating for the development of ethical policies and procedures that address emerging technologies such as robotics and wearables. Ali’s approach underscores the balance between innovation and ethical responsibility, aiming to foster an environment where AI advancements align with societal values and regulatory standards.

    Uthman Ali is BP’s first Global Responsible AI Officer, and has been instrumental in establishing the company’s Digital Ethics Center of Excellence. He advises prominent organizations such as the World Economic Forum and the British Standards Institute on AI governance and ethics. Additionally, Ali contributes to research and policy discussions as an advisor to Oxford University's Oxethica spinout and various AI safety institutes.

    Transcript

    Prioritizing People and Planet as the Metrics for Responsible AI (IEEE Standards Association)

    Robocops and Superhumans: Dilemmas of Frontier Technology (2024 podcast interview)

    33 mins
  • Karen Hao: Is Imperial AI Inevitable?
    May 22 2025

    Kevin Werbach interviews journalist and author Karen Hao about her new book Empire of AI, which chronicles the rise of OpenAI and the broader implications of generative artificial intelligence. Hao reflects on how the ethical challenges of AI have evolved, noting the shift from concerns like data privacy and algorithmic bias to more complex issues such as intellectual property violations, environmental impact, misleading user experiences, and concentration of power. She emphasizes that while some technical solutions exist, they are rarely implemented by developers, and foundational harms often occur before tools reach end users. Hao argues that OpenAI’s trajectory was not inevitable but instead the result of specific ideological beliefs, aggressive scaling decisions, and CEO Sam Altman’s singular fundraising prowess. She critiques the “pseudo-religious” ideologies underpinning Silicon Valley’s AI push, where utopian and doomer narratives coexist to justify rapid development. Hao outlines a more democratic alternative focused on smaller, task-specific models and stronger regulation to redirect AI’s future trajectory.

    Karen Hao has written about AI for publications such as The Atlantic, The Wall Street Journal, and MIT Technology Review. She was the first journalist ever to profile OpenAI, and leads The AI Spotlight Series, a program with the Pulitzer Center that trains thousands of journalists around the world on how to cover AI. She has also been a fellow with the Harvard Technology and Public Purpose program, the MIT Knight Science Journalism program, and the Pulitzer Center’s AI Accountability Network. She won an American Humanist Media Award in 2024, and an American National Magazine Award in 2022.

    Transcript

    Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

    Inside the Chaos at OpenAI (The Atlantic, 2023)

    Cleaning Up ChatGPT Takes Heavy Toll on Human Workers (Wall St. Journal, 2023)

    The New AI Panic (The Atlantic, 2023)

    The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World (MIT Technology Review, 2020)

    35 mins
  • Jaime Banks: How Users Perceive AI Companions
    May 15 2025

    AI companion applications, which create interactive personas for one-on-one conversations, are incredibly popular. However, they raise a number of challenging ethical, legal, and psychological questions. In this episode, Kevin Werbach speaks with researcher Jaime Banks about how users view their conversations with AI companions, and the implications for governance. Banks shares insights from her research on mind-perception, and how AI companion users engage in a willing suspension of disbelief similar to watching a movie. She highlights both potential benefits and dangers, as well as novel issues such as the real feelings of loss users may experience when a companion app shuts down. Banks advocates for data-driven policy approaches rather than moral panic, suggesting responses such as an "AI user's Bill of Rights" for these services.

    Jaime Banks is Katchmar-Wilhelm Endowed Professor at the School of Information Studies at Syracuse University. Her research examines human-technological interaction, including social AI, social robots, and videogame avatars. She focuses on relational construals of mind and morality, communication processes, and how media shape our understanding of complex technologies. Her current funded work focuses on social cognition in human-AI companionship and on the effects of humanizing language on moral judgments about AI.

    Transcript

    ‘She Helps Cheer Me Up’: The People Forming Relationships With AI Chatbots (The Guardian, April 2025)

    Can AI Be Blamed for a Teen's Suicide? (NY Times, October 2024)

    Beyond ChatGPT: AI Companions and the Human Side of AI (Syracuse iSchool video)

    30 mins
  • Kelly Trindel: AI Governance Across the Enterprise? All in a Day’s Work
    May 8 2025

    In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday’s legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure.

    Dr. Kelly Trindel directs Workday’s AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government’s first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly’s influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact.

    Transcript

    Responsible AI: Empowering Innovation with Integrity

    Putting Responsible AI into Action (video masterclass)

    37 mins