H_2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models

“H_2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models” by Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark Barrett, Zhangyang "Atlas" Wang, and Beidi Chen. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023) (A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, eds.), 2023, pp. 34661–34710.

Abstract

Large Language Models (LLMs), despite their recent impressive accomplishments, are notably cost-prohibitive to deploy, particularly for applications involving long-content generation, such as dialogue systems and story writing. Often, a large amount of transient state information, referred to as the KV cache, is stored in GPU memory in addition to model parameters, scaling linearly with the sequence length and batch size. In this paper, we introduce a novel approach for implementing the KV cache that significantly reduces its memory footprint. Our approach is based on the noteworthy observation that a small portion of tokens contributes most of the value when computing attention scores. We call these tokens Heavy Hitters (H_2). Through a comprehensive investigation, we find that (i) the emergence of H_2 is natural and strongly correlates with the frequent co-occurrence of tokens in the text, and (ii) removing them results in significant performance degradation. Based on these insights, we propose Heavy Hitter Oracle (H_2O), a KV cache eviction policy that dynamically retains a balance of recent and H_2 tokens. We formulate the KV cache eviction as a dynamic submodular problem and prove (under mild assumptions) a theoretical guarantee for our novel eviction algorithm, which could help guide future work. We validate the accuracy of our algorithm with OPT, LLaMA, and GPT-NeoX across a wide range of tasks. Our implementation of H_2O with 20% heavy hitters improves the throughput over three leading inference systems, DeepSpeed Zero-Inference, Hugging Face Accelerate, and FlexGen, by up to 29x, 29x, and 3x on OPT-6.7B and OPT-30B. With the same batch size, H_2O reduces latency by up to 1.9x.
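To make the eviction policy described above concrete, below is a minimal pure-Python sketch of a greedy H_2O-style cache: each cached token accumulates the attention mass it receives, and when the budget is exceeded, the cache keeps the most recent tokens plus the older tokens with the highest accumulated scores. The class name H2OCacheSketch, the per-step interface, and the fixed 50/50 split between recent and heavy-hitter slots are illustrative assumptions, not the authors' released implementation, which operates on batched GPU tensors inside the attention layers.

import random

class H2OCacheSketch:
    """Illustrative greedy KV-cache eviction in the spirit of H_2O (a sketch,
    not the paper's code): keep a recent window plus heavy-hitter tokens."""

    def __init__(self, budget, recent_ratio=0.5):
        self.budget = budget                                # max tokens kept
        self.n_recent = max(1, int(budget * recent_ratio))  # recent-window slots
        self.n_heavy = budget - self.n_recent               # heavy-hitter slots
        self.keys, self.values = [], []                     # cached K/V entries
        self.acc_scores = []                                # accumulated attention

    def step(self, key, value, attn_row):
        # attn_row: this decoding step's attention weights over the current
        # cache (len(attn_row) == len(self.keys)); self-attention of the new
        # token is omitted here as a simplification.
        for i, a in enumerate(attn_row):
            self.acc_scores[i] += float(a)
        self.keys.append(key)
        self.values.append(value)
        self.acc_scores.append(0.0)
        if len(self.keys) > self.budget:
            self._evict()

    def _evict(self):
        n = len(self.keys)
        recent = set(range(n - self.n_recent, n))  # always keep recent window
        # Among older tokens, keep the top accumulated-attention heavy hitters.
        older = sorted((i for i in range(n) if i not in recent),
                       key=lambda i: self.acc_scores[i], reverse=True)
        keep = sorted(recent | set(older[:self.n_heavy]))
        self.keys = [self.keys[i] for i in keep]
        self.values = [self.values[i] for i in keep]
        self.acc_scores = [self.acc_scores[i] for i in keep]

# Toy demo with random stand-ins for K/V vectors and softmax rows:
cache = H2OCacheSketch(budget=8)
for t in range(32):
    w = [random.random() for _ in cache.keys]
    s = sum(w) or 1.0
    cache.step(key=t, value=t, attn_row=[x / s for x in w])
print(len(cache.keys))  # stays at the budget: 8

Because eviction runs once per decoding step, the cache size (and hence KV memory) stays constant regardless of the generated sequence length, which is the source of the throughput and latency gains reported in the abstract.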

BibTeX entry:

@inproceedings{ZSZ+23a,
   author = {Zhenyu Zhang and Ying Sheng and Tianyi Zhou and Tianlong Chen
	and Lianmin Zheng and Ruisi Cai and Zhao Song and Yuandong Tian
	and Christopher R{\'e} and Clark Barrett and Zhangyang ``Atlas''
	Wang and Beidi Chen},
   editor = {A. Oh and T. Naumann and A. Globerson and K. Saenko and M.
	Hardt and S. Levine},
   title = {H$_2$O: Heavy-Hitter Oracle for Efficient Generative
	Inference of Large Language Models},
   booktitle = {Advances in Neural Information Processing Systems 36
	(NeurIPS 2023)},
   volume = {36},
   pages = {34661--34710},
   publisher = {Curran Associates, Inc.},
   year = {2023},
   url =
	{https://proceedings.neurips.cc/paper_files/paper/2023/file/6ceefa7b15572587b78ecfcebb2827f8-Paper-Conference.pdf}
}
