The Four Pillars of Trustworthy AI—and Who Owns Them


Trust in AI isn’t a vibe—it’s something you can intentionally design for (or accidentally break). In this episode, Galen sits down with Cal Al-Dhubaib to unpack “trust engineering”: a shared toolkit that helps cross-functional teams (engineering, UX, governance, risk, and business) talk about the same trust risks in the same language. They get into why “boring AI is safe AI,” how guardrails and human handoffs actually preserve trust, and why the biggest failures often aren’t the model—they’re the systems (and incentives) wrapped around it.

You’ll also hear real-world examples of trust going sideways—from biased outcomes to hallucinated “gaslighting,” to AI-assisted deliverables causing accuracy issues—and what project leaders can do to prevent finger-pointing when it happens.

Resources from this episode:

  • Join the Digital Project Manager Community
  • Subscribe to the newsletter to get our latest articles and podcasts
  • Connect with Cal on LinkedIn
  • Check out Further
  • AI Incident Database