Big data, small data, and AI oversight with David Sandberg
About this listen
In this episode, we look at the actuarial principles that make models safer: parallel modeling, small data with provenance, and real-time human supervision. To help us, long-time insurtech and startup advisor David Sandberg, FSA, MAAA, CERA, joins us to share more about his actuarial expertise in data management and AI.
We also challenge the hype around AI by reframing it as a prediction machine and putting human judgment at the beginning, middle, and end. By the end, you might think about “human-in-the-loop” in a whole new way.
• Actuarial valuation debates and why parallel models win
• AI’s real value: enhance and accelerate the growth of human capital
• Transparency, accountability, and enforceable standards
• Prediction versus decision and learning from actual-to-expected
• Small data as interpretable, traceable fuel for insight
• Drift, regime shifts, and limits of regression and LLMs
• Mapping decisions, setting risk appetite, and enterprise risk management (ERM) for AI
• Where humans belong: the beginning, middle, and end of the system
• Agentic AI complexity versus validated end-to-end systems
• Training judgment with tools that force critique and citation
Cultural references:
- Foundation, Apple TV+
- The Feeling of Power, Isaac Asimov
- Player Piano, Kurt Vonnegut
For more information, see the article "Actuarial and data science: Bridging the gap."
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - See past episodes and submit your feedback! It continues to inspire future episodes.