HabanaAI/vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
Intel® Gaudi® README | Documentation | Blog | Paper | Discord | Twitter/X | Developer Slack
Note
For Intel Gaudi specific setup instructions and examples, please refer to the Intel® Gaudi® README. For Jupyter-notebook-based quickstart tutorials, refer to Getting Started with vLLM and Understanding vLLM on Gaudi.
We are excited to invite you to our Menlo Park meetup with Meta on the evening of Thursday, February 27! Meta engineers will discuss their improvements on top of vLLM, and vLLM contributors will share updates from the v0.7.x series of releases. Register Now
Latest News 🔥
- [2025/01] We are excited to announce the alpha release of vLLM V1: a major architectural upgrade with 1.7x speedup! Clean code, optimized execution loop, zero-overhead prefix caching, enhanced multimodal support, and more. Please check out our blog post here.
- [2025/01] We hosted the eighth vLLM meetup with Google Cloud! Please find the meetup slides from the vLLM team here, and the Google Cloud team here.
- [2024/12] vLLM joins the PyTorch ecosystem! Easy, Fast, and Cheap LLM Serving for Everyone!
- [2024/11] We hosted the seventh vLLM meetup with Snowflake! Please find the meetup slides from the vLLM team here, and the Snowflake team here.
- [2024/10] We have just created a developer slack (slack.vllm.ai) focusing on coordinating contributions and discussing features. Please feel free to join us there!
- [2024/10] Ray Summit 2024 held a special track for vLLM! Please find the opening talk slides from the vLLM team here. Learn more from the talks from other vLLM contributors and users!
- [2024/09] We hosted the sixth vLLM meetup with NVIDIA! Please find the meetup slides here.
- [2024/07] We hosted the fifth vLLM meetup with AWS! Please find the meetup slides here.
- [2024/07] In partnership with Meta, vLLM officially supports Llama 3.1 with FP8 quantization and pipeline parallelism! Please check out our blog post here.
- [2024/06] We hosted the fourth vLLM meetup with Cloudflare and BentoML! Please find the meetup slides here.
- [2024/05] vLLM-fork specific: Added Intel® Gaudi® 2 support with SynapseAI 1.16.0. For more information, please refer to the Intel® Gaudi® README.
- [2024/04] We hosted the third vLLM meetup with Roblox! Please find the meetup slides here.
- [2024/01] We hosted the second vLLM meetup with IBM! Please find the meetup slides here.
- [2023/10] We hosted the first vLLM meetup with a16z! Please find the meetup slides here.
- [2023/08] We would like to express our sincere gratitude to Andreessen Horowitz (a16z) for providing a generous grant to support the open-source development and research of vLLM.
- [2023/06] We officially released vLLM! FastChat-vLLM integration has powered LMSYS Vicuna and Chatbot Arena since mid-April. Check out our blog post.
vLLM is a fast and easy-to-use library for LLM inference and serving.
Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graph
- Quantizations: GPTQ, AWQ, INT4, INT8, and FP8 (see the sketch after this list)
- Optimized CUDA kernels, including integration with FlashAttention and FlashInfer.
- Speculative decoding
- Chunked prefill
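To make the quantization option above concrete, here is a minimal, hedged sketch of offline inference with an AWQ-quantized checkpoint. The model ID is a placeholder, and the snippet assumes an installed vLLM build with AWQ kernels for your hardware:

```python
# Hedged sketch: offline inference with an AWQ-quantized model.
# "TheBloke/Llama-2-7B-AWQ" is a placeholder checkpoint; any AWQ model
# from the Hugging Face Hub should work the same way, assuming the
# installed vLLM build provides AWQ kernels for your hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",
    quantization="awq",  # GPTQ/FP8 checkpoints are selected analogously
)

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```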
Performance benchmark: We include a performance benchmark at the end of our blog post. It compares the performance of vLLM against other LLM serving engines (TensorRT-LLM, SGLang, and LMDeploy). The implementation is under the nightly-benchmarks folder and you can reproduce this benchmark using our one-click runnable script.
vLLM is flexible and easy to use with:
- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
- Tensor parallelism and pipeline parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server (see the client sketch after this list)
- Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, TPUs, and AWS Neuron
- Prefix caching support
- Multi-LoRA support
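As a sketch of the OpenAI-compatible server and streaming outputs in action, the snippet below assumes a server was already launched locally (an example launch command is shown in the comment) and uses the standard openai Python client. The model name, host, and port are placeholders:

```python
# Hedged sketch: querying vLLM's OpenAI-compatible server with streaming.
# Assumes a server was started separately, e.g.:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 2
# The model name, host, and port below are placeholders for that setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default serving endpoint
    api_key="EMPTY",                      # no API key is required by default
)

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Explain PagedAttention in one sentence."}],
    stream=True,  # exercises the streaming-outputs feature listed above
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

Because the server speaks the OpenAI wire format, existing OpenAI-client code typically only needs the `base_url` changed to point at the vLLM endpoint.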
vLLM seamlessly supports most popular open-source models on HuggingFace, including:
- Transformer-like LLMs (e.g., Llama)
- Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
- Embedding models (e.g., E5-Mistral)
- Multi-modal LLMs (e.g., LLaVA)
Find the full list of supported models here.
Install vLLM with pip or from source:

```bash
pip install vllm
```
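After installing, a first offline-inference run might look like the following minimal sketch; "facebook/opt-125m" is a small placeholder model that can be swapped for any supported checkpoint:

```python
# Hedged sketch: a first offline-inference run after `pip install vllm`.
# "facebook/opt-125m" is a small placeholder model; swap in any
# supported checkpoint from the Hugging Face Hub.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.0, max_tokens=32)

for out in llm.generate(["Hello, my name is"], params):
    print(out.outputs[0].text)
```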
Visit our documentation to learn more.
We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.
vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!
Cash Donations:
- a16z
- Dropbox
- Sequoia Capital
- Skywork AI
- ZhenFund
Compute Resources:
- AMD
- Anyscale
- AWS
- Crusoe Cloud
- Databricks
- DeepInfra
- Google Cloud
- Lambda Lab
- Nebius
- Novita AI
- NVIDIA
- Replicate
- Roblox
- RunPod
- Trainy
- UC Berkeley
- UC San Diego
Slack Sponsor: Anyscale
We also have an official fundraising venue through OpenCollective. We plan to use the fund to support the development, maintenance, and adoption of vLLM.
If you use vLLM for your research, please cite our paper:
```bibtex
@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}
```
- For technical questions and feature requests, please use GitHub issues or discussions.
- For discussing with fellow users and coordinating contributions and development, please use Slack.
- For security disclosures, please use GitHub's security advisory feature.
- For collaborations and partnerships, please contact us at vllm-questions AT lists.berkeley.edu.
- If you wish to use vLLM's logo, please refer to our media kit repo.