Offloading LLM Models and KV Caches to NVMe SSDs


About this listen

This March 2025 paper examines the input/output (I/O) characteristics of offloading large language model (LLM) components to NVMe SSDs during inference, a key technique for working around GPU memory limits as model sizes continue to grow. The researchers analyzed block-layer I/O traces from two prominent LLM frameworks, DeepSpeed and FlexGen, to understand how model weights and key-value (KV) caches are handled. The findings indicate that asynchronous I/O via libaio significantly outperforms synchronous POSIX I/O for tensor transfers, although neither method fully saturates the NVMe SSD's theoretical bandwidth. Model-weight offloading is dominated by 128 KiB reads concentrated at the start of inference, whereas KV cache offloading involves both reads and writes of similar size, with read bandwidth substantially higher than write bandwidth. The authors conclude that modern NVMe SSDs can support current LLM inference workloads, but they highlight opportunities for further optimization in SSD design and KV cache management.
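To make the libaio-versus-POSIX distinction concrete, here is a minimal C sketch, not taken from the paper: it issues a single asynchronous 128 KiB read of an offloaded tensor file through libaio, with O_DIRECT and an aligned buffer as direct NVMe I/O typically requires. The file name "weights.bin", the queue depth, and the error handling are illustrative assumptions; only the 128 KiB request size reflects the finding summarized above.

```c
/* Illustrative sketch only: one asynchronous 128 KiB read via libaio.
 * "weights.bin" and the queue depth of 32 are hypothetical; real frameworks
 * keep many such requests in flight to overlap SSD transfers with compute. */
#define _GNU_SOURCE           /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define REQ_SIZE (128 * 1024) /* 128 KiB, the dominant request size reported */

int main(void) {
    int fd = open("weights.bin", O_RDONLY | O_DIRECT); /* hypothetical tensor file */
    if (fd < 0) { perror("open"); return 1; }

    void *buf = NULL;
    if (posix_memalign(&buf, 4096, REQ_SIZE) != 0) {   /* O_DIRECT needs aligned memory */
        close(fd); return 1;
    }

    io_context_t ctx = 0;
    if (io_setup(32, &ctx) != 0) {                     /* queue depth 32 (assumed) */
        fprintf(stderr, "io_setup failed\n"); return 1;
    }

    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, REQ_SIZE, 0);          /* async read at offset 0 */

    if (io_submit(ctx, 1, cbs) != 1) {
        fprintf(stderr, "io_submit failed\n"); return 1;
    }

    /* The caller could do GPU work here; io_getevents blocks until completion. */
    struct io_event ev;
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) {
        fprintf(stderr, "io_getevents failed\n"); return 1;
    }
    printf("read %ld bytes\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}
```

Compile with something like `gcc offload_read.c -laio`. The synchronous POSIX path the paper compares against would instead be a blocking pread() of the same 128 KiB, which cannot overlap transfers with computation in the same way.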


Source:

https://dl.acm.org/doi/10.1145/3719330.3721230
