[NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.

FMInference/H2O

License: MIT

Code for the paper "H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models"

Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark Barrett, Zhangyang Wang, Beidi Chen

Overview

Large Language Models (LLMs), despite their recent impressive accomplishments, are notably cost-prohibitive to deploy, particularly for applications involving long-content generation, such as dialogue systems and story writing. Often, a large amount of transient state information, referred to as the KV cache, is stored in GPU memory in addition to model parameters, scaling linearly with the sequence length and batch size. In this paper, we introduce a novel approach for implementing the KV cache which significantly reduces its memory footprint. Our approach is based on the noteworthy observation that a small portion of tokens contributes most of the value when computing attention scores. We call these tokens Heavy Hitters (H2). Through a comprehensive investigation, we find that (i) the emergence of H2 is natural and strongly correlates with the frequent co-occurrence of tokens in the text, and (ii) removing them results in significant performance degradation. Based on these insights, we propose Heavy Hitter Oracle (H2O), a KV cache eviction policy that dynamically retains a balance of recent and H2 tokens. We formulate the KV cache eviction as a dynamic submodular problem and prove (under mild assumptions) a theoretical guarantee for our novel eviction algorithm which could help guide future work. We validate the accuracy of our algorithm with OPT, LLaMA, and GPT-NeoX across a wide range of tasks. Our implementation of H2O with 20% heavy hitters improves the throughput over three leading inference systems DeepSpeed Zero-Inference, Hugging Face Accelerate, and FlexGen by up to 29×, 29×, and 3× on OPT-6.7B and OPT-30B. With the same batch size, H2O can reduce the latency by up to 1.9×.
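The eviction policy described above can be sketched in a few lines: under a fixed cache budget, keep the most recent positions plus the older positions that have accumulated the most attention (the heavy hitters). This is a minimal illustration, not the repository's actual API — the function name, arguments, and the even recent/heavy split are assumptions for clarity.

```python
def h2o_keep_indices(attn_col_sums, budget, recent_ratio=0.5):
    """Pick which KV-cache positions survive under a fixed budget.

    attn_col_sums: accumulated attention each cached position has received
                   (one float per position; a proxy for heavy-hitter score).
    budget:        total number of positions the cache may hold.
    recent_ratio:  fraction of the budget reserved for the newest tokens
                   (illustrative default, not the paper's fixed choice).
    """
    n = len(attn_col_sums)
    if n <= budget:
        return list(range(n))  # everything fits; nothing to evict
    n_recent = int(budget * recent_ratio)
    n_heavy = budget - n_recent
    recent = set(range(n - n_recent, n))  # always keep the newest tokens
    # Rank the remaining (older) positions by accumulated attention mass.
    older = [i for i in range(n) if i not in recent]
    heavy = sorted(older, key=lambda i: attn_col_sums[i], reverse=True)[:n_heavy]
    return sorted(recent | set(heavy))

# Positions 0 and 2 carry the most attention mass, so they survive
# alongside the two most recent positions (4 and 5).
kept = h2o_keep_indices([5.0, 0.1, 3.0, 0.2, 0.3, 0.4], budget=4)
```

The key design point is that the heavy-hitter scores can be maintained online as a running sum of attention weights, so eviction decisions need no second pass over the sequence.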

Content

We provide two codebases implementing the heavy-hitter oracle for efficient generative inference of large language models:

  • h2o_flexgen: Achieves higher throughput for LLM generation; the code is based on FlexGen.
  • h2o_hf: Tests performance on different benchmarks; the code is based on Hugging Face. Both simulation code (masking the attention matrix) and a real KV-dropping implementation are provided (please refer to h2o_hf/utils_real_drop).
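The masking-based simulation mentioned for h2o_hf can be illustrated as follows: instead of actually dropping KV entries, build a boolean mask over the attention matrix that zeroes out evicted positions row by row. This sketch assumes a single head and uses illustrative names (`simulate_h2o_mask`, `budget`, `recent`); it is not h2o_hf's real interface.

```python
import numpy as np

def simulate_h2o_mask(attn, budget, recent):
    """Build a causal eviction mask over an (L, L) attention matrix.

    For each query row q, only the `recent` most recent keys plus the
    heaviest older keys (by attention mass accumulated over previous
    rows) remain unmasked, for at most `budget` keys per row.
    """
    L = attn.shape[0]
    mask = np.zeros_like(attn, dtype=bool)
    acc = np.zeros(L)  # running attention mass per key position
    for q in range(L):
        keys = np.arange(q + 1)  # causal attention: keys 0..q only
        if len(keys) <= budget:
            keep = keys  # under budget, keep everything
        else:
            recent_keys = keys[-recent:]
            older = keys[:-recent]
            n_heavy = budget - recent
            # Heaviest older keys by accumulated attention so far.
            heavy = older[np.argsort(acc[older])[::-1][:n_heavy]]
            keep = np.concatenate([heavy, recent_keys])
        mask[q, keep] = True
        acc[keep] += attn[q, keep]  # update heavy-hitter statistics
    return mask

mask = simulate_h2o_mask(np.ones((5, 5)), budget=3, recent=1)
```

Applying such a mask (setting masked scores to -inf before the softmax) simulates eviction exactly while keeping the full cache in memory, which is convenient for accuracy benchmarking.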
