Walter Haydock, StackAware: In Search Of AI Governance Certification
Summary
Walter Haydock draws a direct line from military risk management to the enterprise AI challenge. He argues that organizations need to stop doing "math with colors" and move toward quantitative assessment that assigns dollar values to potential AI failures. Much of the conversation in this episode focuses on ISO 42001, the global standard for AI management systems, which Haydock has championed and which his own firm has been certified against. He sketches a three-part taxonomy of AI governance frameworks: legislation you either comply with or don't, voluntary self-attestable frameworks like the NIST AI RMF, and externally certifiable standards like ISO 42001 that bring independent verification.
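To make the "dollar values instead of colors" idea concrete, here is a minimal sketch using the classic annualized loss expectancy (ALE) formula from quantitative risk management. The scenarios and figures below are purely illustrative assumptions, not examples from the episode or from StackAware's methodology:

```python
def annualized_loss_expectancy(single_loss_usd: float,
                               annual_rate_of_occurrence: float) -> float:
    """Classic ALE formula: expected yearly loss = SLE * ARO."""
    return single_loss_usd * annual_rate_of_occurrence

# Hypothetical AI failure scenarios: (loss per incident in USD, incidents per year)
scenarios = {
    "model leaks customer PII": (250_000, 0.1),
    "hallucinated output triggers contract breach": (50_000, 0.5),
}

for name, (sle, aro) in scenarios.items():
    print(f"{name}: ${annualized_loss_expectancy(sle, aro):,.0f}/year")
```

The point of the exercise is comparability: a $25,000/year exposure and a $25,000/year exposure can be ranked and budgeted against each other, whereas two "yellow" ratings cannot.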
Haydock outlines a forward-looking vision in which certification, insurance, and legal safe harbors reinforce one another: machine-readable audit data would eventually allow insurers to make informed underwriting decisions about AI risk, reducing uncertainty for both enterprises and their customers. As he acknowledges, though, that environment is still far off; AI audits today remain roughly 90% manual.
Walter Haydock is the founder of StackAware, which helps AI-powered companies manage security, compliance, and privacy risk. Before entering the private sector, he served as a reconnaissance and intelligence officer in the U.S. Marine Corps, as a professional staff member for the Homeland Security Committee of the U.S. House of Representatives, and as an analyst at the National Counterterrorism Center. He is a graduate of the United States Naval Academy, Georgetown University's School of Foreign Service, and Harvard Business School.
Transcript
Deploy Securely (Haydock's Substack)