Towards Optimal Caching and Model Selection for Large Model Inference

“Towards Optimal Caching and Model Selection for Large Model Inference,” by Banghua Zhu, Ying Sheng, Lianmin Zheng, Clark Barrett, Michael Jordan, and Jiantao Jiao. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023) (A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, eds.), 2023, pp. 59062-59094.

Abstract

Large Language Models (LLMs) and other large foundation models have achieved impressive results, but their size exacerbates existing resource consumption and latency challenges. In particular, the large-scale deployment of these models is hindered by the significant resource requirements during inference. In this paper, we study two approaches for mitigating these challenges: employing a cache to store previous queries and learning a model selector to choose from an ensemble of models for query processing. Theoretically, we provide an optimal algorithm for jointly optimizing both approaches to reduce the inference cost in both offline and online tabular settings. By combining a caching algorithm, namely Greedy Dual Size with Frequency (GDSF) or Least Expected Cost (LEC), with a model selector, we achieve optimal rates in both offline and online settings. Empirically, simulations show that our caching and model selection algorithm greatly improves on the baselines, with up to a 50x improvement when the ratio between the maximum and minimum cost is 100. Experiments on real datasets show a 4.3x improvement in FLOPs over the baseline when the FLOPs ratio is 10, and a 1.8x improvement in latency when the average-latency ratio is 1.85.
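
To make the two ingredients in the abstract concrete, below is a minimal Python sketch of an LEC-style (Least Expected Cost) cache paired with a cost-aware model selector. This is an illustration of the idea only, not the authors' implementation: the class and function names, the frequency estimator, and the 0.8 quality bar are all assumptions made for the example. (GDSF differs in that its eviction priority also divides by entry size and adds an aging clock.)

from collections import defaultdict

class LECCache:
    """Cache that evicts the entry with the smallest expected saving,
    i.e. (estimated query frequency) * (cost to recompute the response)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = {}             # query -> (response, recompute_cost)
        self.freq = defaultdict(int)  # empirical frequency estimates

    def get(self, query: str):
        self.freq[query] += 1
        hit = self.entries.get(query)
        return hit[0] if hit is not None else None

    def put(self, query: str, response: str, cost: float) -> None:
        if query not in self.entries and len(self.entries) >= self.capacity:
            victim = min(self.entries,
                         key=lambda q: self.freq[q] * self.entries[q][1])
            # Keep the cache unchanged if the newcomer would save less
            # than every resident entry.
            if self.freq[query] * cost <= self.freq[victim] * self.entries[victim][1]:
                return
            del self.entries[victim]
        self.entries[query] = (response, cost)

def select_model(query: str, models: list, est_quality) -> dict:
    """Pick the cheapest model whose predicted quality clears a (made-up)
    bar of 0.8; otherwise fall back to the highest-quality model."""
    ok = [m for m in models if est_quality(query, m) >= 0.8]
    if ok:
        return min(ok, key=lambda m: m["cost"])
    return max(models, key=lambda m: est_quality(query, m))

On a cache miss one would call select_model, run the chosen model, and then put the response together with its measured cost, so that cheap-to-recompute responses are evicted before expensive ones.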

BibTeX entry:

@inproceedings{ZSZ+23b,
  author    = {Banghua Zhu and Ying Sheng and Lianmin Zheng and Clark Barrett and Michael Jordan and Jiantao Jiao},
  editor    = {A. Oh and T. Naumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
  title     = {Towards Optimal Caching and Model Selection for Large Model Inference},
  booktitle = {Advances in Neural Information Processing Systems 36 (NeurIPS 2023)},
  volume    = {36},
  pages     = {59062--59094},
  publisher = {Curran Associates, Inc.},
  year      = {2023},
  url       = {https://proceedings.neurips.cc/paper_files/paper/2023/file/b914a8fcea5c176cf1ed75c762ce27fd-Paper-Conference.pdf}
}
