
Offloading LLM Models and KV Caches to NVMe SSDs
About this listen
This March 2025 paper examines the input/output (I/O) characteristics of offloading large language model (LLM) components to NVMe SSDs during inference, a key technique for fitting ever-growing models within limited GPU memory. The researchers analyzed block-layer I/O traces from two prominent LLM frameworks, DeepSpeed and FlexGen, to understand how model weights and key-value (KV) caches are handled. The findings indicate that asynchronous I/O via libaio significantly outperforms synchronous POSIX I/O for tensor transfers, although neither method fully saturates the NVMe SSD's theoretical bandwidth. Model offloading is dominated by 128 KiB reads issued mainly at the start of inference, while KV cache offloading involves both reads and writes of similar size, with read bandwidth substantially higher than write bandwidth. Ultimately, the research suggests that modern NVMe SSDs can support current LLM inference workloads but highlights opportunities for further optimization in SSD design and KV cache management.
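The contrast between synchronous POSIX reads and libaio can be made concrete with a short sketch. The C program below is not from the paper; the file name, queue depth, and alignment values are illustrative assumptions. It submits several 128 KiB reads at once through libaio, the pattern that lets the SSD service requests concurrently instead of one at a time:

```c
#define _GNU_SOURCE          /* needed for O_DIRECT */
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define IO_SIZE     (128 * 1024)  /* 128 KiB, the dominant request size the traces report */
#define QUEUE_DEPTH 8             /* number of in-flight requests (illustrative assumption) */

int main(void) {
    io_context_t ctx;
    memset(&ctx, 0, sizeof(ctx));          /* context must be zeroed before io_setup */
    if (io_setup(QUEUE_DEPTH, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    /* O_DIRECT bypasses the page cache so requests reach the device as issued.
     * "model_weights.bin" is a hypothetical file standing in for offloaded tensors. */
    int fd = open("model_weights.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    struct iocb  cb[QUEUE_DEPTH];
    struct iocb *cbs[QUEUE_DEPTH];
    void        *buf[QUEUE_DEPTH];

    for (int i = 0; i < QUEUE_DEPTH; i++) {
        /* O_DIRECT requires aligned buffers; 4 KiB alignment is typical. */
        if (posix_memalign(&buf[i], 4096, IO_SIZE) != 0) { fprintf(stderr, "alloc failed\n"); return 1; }
        io_prep_pread(&cb[i], fd, buf[i], IO_SIZE, (long long)i * IO_SIZE);
        cbs[i] = &cb[i];
    }

    /* Submit all reads at once; the kernel and drive service them concurrently. */
    if (io_submit(ctx, QUEUE_DEPTH, cbs) != QUEUE_DEPTH) { fprintf(stderr, "io_submit failed\n"); return 1; }

    /* Harvest completions; a real offloading engine would overlap this with GPU compute. */
    struct io_event events[QUEUE_DEPTH];
    int done = io_getevents(ctx, QUEUE_DEPTH, QUEUE_DEPTH, events, NULL);
    printf("completed %d of %d reads\n", done, QUEUE_DEPTH);

    for (int i = 0; i < QUEUE_DEPTH; i++) free(buf[i]);
    close(fd);
    io_destroy(ctx);
    return 0;
}
```

A synchronous POSIX baseline would issue the same reads one pread() call at a time, leaving the SSD idle between requests, which is consistent with the bandwidth gap the paper's trace analysis reports.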
Source:
https://dl.acm.org/doi/10.1145/3719330.3721230