Building on AI: How Much Risk Can You Handle?
About this listen
In this brief on-the-go episode, Tom discusses the risks of building businesses on centralized AI infrastructure. Sparked by Cloudflare's recent major outage, he explores what happens when AI vendors go down and how companies should think about their risk appetite when they depend on providers like OpenAI, Anthropic, and others. From wrapping an entire business strategy around AI APIs to considering self-hosted alternatives, Tom breaks down the strategic considerations for both startups and established businesses looking to integrate AI into their core operations.
Key Topics
- The Cloudflare outage and its implications for internet infrastructure
- Risk management when building on third-party AI vendors
- Different deployment options: OpenAI direct, Azure AI playground, or self-hosted models
- How risk appetite should differ between startups and established businesses
- Strategic considerations for making AI a core part of your business
- The AI bubble discussion and vendor dependency concerns
Need help navigating AI infrastructure decisions for your business? Get in touch at https://www.concepttocloud.com