
Building a Local Large Language Model (LLM)
About this listen
Another #ComputingWeek talk turned into a podcast! Two Red Hat software engineers, both recent graduates of SETU, returned to discuss running your own LLM on a local machine, and how models and datasets are built and reduced (quantised) so they can run on a laptop rather than an array of servers. Mark Campbell and Dimitri Saridakis provided excellent insight into the technical issues surrounding this topic, before getting into some of the ethical and moral questions with host Rob O'Connor at the end.
You can connect with all the people on this podcast on LinkedIn at:
- Mark Campbell https://www.linkedin.com/in/mark-campbell-76846b194/
- Dimitri Saridakis https://www.linkedin.com/in/dimitri-saridakis-32a087139/
- Rob O'Connor https://www.linkedin.com/in/robertoconnorirl/
Here are links to some of the tools referenced in the podcast, followed by a short sketch of using one of them locally:
- Red Hat OpenShift AI https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai
- LMStudio https://lmstudio.ai/
- Ollama https://ollama.ai/
- HuggingFace https://huggingface.co/
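For listeners who want to try this after the episode, here is a minimal sketch (not from the podcast itself) of querying a quantised model served locally by Ollama through its HTTP API. The model name "llama3", the default port 11434, and the prior `ollama pull llama3` step are assumptions about a typical local setup, not details confirmed in the episode.

```python
# Minimal sketch: asking a locally served model a question via Ollama's HTTP API.
# Assumes Ollama is installed and running, and that a model has already been
# pulled, e.g. with `ollama pull llama3` (example model name, not from the podcast).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return the generated text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(ask("In one sentence, what is quantisation of an LLM?"))
```

Because the model runs entirely on the local machine, no prompt data leaves the laptop, which is one of the practical reasons for local LLMs discussed in the episode.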