inferflow/inferflow


Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). With Inferflow, users can serve most of the common transformer models by simply modifying some lines in the corresponding configuration files, without writing a single line of source code. Further details can be found in our technical report.

Quick Links

  1. Getting started (on Windows | on Linux, Mac, and Windows Subsystem for Linux (WSL))
  2. Serving 34B or 40B models on a single 24GB-VRAM GPU (e.g., RTX 3090 and 4090)

Milestones

  • 2024-2-18: Added support for mixture-of-experts (MoE) models.
  • 2024-1-17: Version 0.1.0 was formally released.

Main Features

  1. Extensible and highly configurable: The typical way to serve a new model with Inferflow is to edit a model specification file, not to add or edit source code. Inferflow implements a modular framework of atomic building blocks and technologies, making it compositionally generalizable to new models: a new model can be served by Inferflow as long as its atomic building blocks and technologies are already "known" (to Inferflow).
  2. 3.5-bit quantization: Inferflow implements 2-bit, 3-bit, 3.5-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantization. Among these schemes, 3.5-bit quantization is a new one introduced by Inferflow.
  3. Hybrid model partition for multi-GPU inference: Inferflow supports multi-GPU inference with three model partitioning strategies to choose from: partition-by-layer (pipeline parallelism), partition-by-tensor (tensor parallelism), and hybrid partitioning (hybrid parallelism). Hybrid partitioning is seldom supported by other inference engines.
  4. Wide file format support (and safely loading pickle data): Inferflow supports loading models of multiple file formats directly, without reliance on an external converter. Supported formats include pickle, safetensors, llama.cpp gguf, etc. Reading pickle files with Python code is known to pose security risks. By implementing a simplified pickle parser in C++, Inferflow supports safely loading models from pickle data.
  5. Wide network type support: Supporting three types of transformer models: decoder-only models, encoder-only models, and encoder-decoder models.
  6. GPU/CPU hybrid inference: Supporting GPU-only, CPU-only, and GPU/CPU hybrid inference.
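To make the quantization feature concrete, here is a minimal NumPy sketch of k-bit linear (affine) quantization of a weight block. It illustrates the general technique only, not Inferflow's actual kernels; a fractional width such as 3.5 bits can be realized by, for example, packing two codes into 7 bits.

```python
import numpy as np

def quantize_linear(w: np.ndarray, bits: int):
    """Quantize a weight block to `bits`-bit integer codes plus
    the scale and zero-point needed for dequantization."""
    levels = 2 ** bits - 1
    zero = float(w.min())
    scale = (float(w.max()) - zero) / levels
    codes = np.round((w - zero) / scale).astype(np.int32)
    return codes, scale, zero

def dequantize_linear(codes: np.ndarray, scale: float, zero: float) -> np.ndarray:
    """Reconstruct approximate weights from the integer codes."""
    return codes.astype(np.float32) * scale + zero

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
codes, scale, zero = quantize_linear(w, 4)     # 4-bit codes in [0, 15]
w_hat = dequantize_linear(codes, scale, zero)  # per-weight error <= scale / 2
```

Rounding bounds each weight's reconstruction error by half a quantization step, which is why more bits (a smaller scale) give higher fidelity.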

Below is a comparison between Inferflow and some other inference engines:

| Inference Engine | New Model Support | Supported File Formats | Network Structures | Quantization Bits | Hybrid Parallelism for Multi-GPU Inference | Programming Languages |
|---|---|---|---|---|---|---|
| Huggingface Transformers | Adding/editing source code | pickle (unsafe), safetensors | decoder-only, encoder-decoder, encoder-only | 4b, 8b | ✘ | Python |
| vLLM | Adding/editing source code | pickle (unsafe), safetensors | decoder-only | 4b, 8b | ✘ | Python |
| TensorRT-LLM | Adding/editing source code | - | decoder-only, encoder-decoder, encoder-only | 4b, 8b | ✘ | C++, Python |
| DeepSpeed-MII | Adding/editing source code | pickle (unsafe), safetensors | decoder-only | - | ✘ | Python |
| llama.cpp | Adding/editing source code | gguf | decoder-only | 2b, 3b, 4b, 5b, 6b, 8b | ✘ | C/C++ |
| llama2.c | Adding/editing source code | llama2.c | decoder-only | - | ✘ | C |
| LMDeploy | Adding/editing source code | pickle (unsafe), TurboMind | decoder-only | 4b, 8b | ✘ | C++, Python |
| Inferflow | Editing configuration files | pickle (safe), safetensors, gguf, llama2.c | decoder-only, encoder-decoder, encoder-only | 2b, 3b, 3.5b, 4b, 5b, 6b, 8b | ✔ | C++ |
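The partitioning strategies compared above can be pictured with a small sketch. The function below is purely illustrative (`hybrid_partition`, `layer_groups`, and `tensor_slices` are invented names, not Inferflow's API): under hybrid partitioning, layers are grouped into pipeline stages, and within each stage every layer's tensors are sliced across that stage's GPUs.

```python
def hybrid_partition(n_layers: int, layer_groups: int, tensor_slices: int) -> dict:
    """Map each layer to the list of GPU ids holding a slice of it.
    layer_groups == 1 degenerates to pure tensor parallelism;
    tensor_slices == 1 degenerates to pure pipeline parallelism."""
    per_group = (n_layers + layer_groups - 1) // layer_groups  # ceil division
    plan = {}
    for layer in range(n_layers):
        stage = layer // per_group
        plan[layer] = [stage * tensor_slices + s for s in range(tensor_slices)]
    return plan

# 4 GPUs arranged as 2 pipeline stages x 2 tensor slices for an 8-layer model:
plan = hybrid_partition(8, layer_groups=2, tensor_slices=2)
# layers 0-3 are sliced across GPUs 0 and 1; layers 4-7 across GPUs 2 and 3
```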

Support Matrix

Supported Model File Formats

  • Pickle (Inferflow avoids the security issue that most other inference engines face when loading pickle-format files).
  • Safetensors
  • llama.cpp gguf
  • llama2.c
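To illustrate the pickle security issue mentioned above: a pickle stream can reference arbitrary globals such as os.system, which is why loading untrusted pickles with plain Python is unsafe. The sketch below blocks all global references, roughly the restriction that a data-only parser (such as Inferflow's C++ one) enforces by construction.

```python
import io
import os
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Refuse to resolve any global reference, so only plain data
    (dicts, lists, strings, numbers, ...) can be deserialized."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return SafeUnpickler(io.BytesIO(data)).load()

# Plain metadata round-trips fine:
meta = safe_loads(pickle.dumps({"vocab_size": 32000}))

# A payload that smuggles in a callable is rejected at load time:
try:
    safe_loads(pickle.dumps(os.system))
except pickle.UnpicklingError as err:
    print("rejected:", err)
```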

Supported Technologies, Modules, and Options

  • Supported modules and technologies related to model definition:

    • Normalization functions: STD, RMS
    • Activation functions: RELU, GELU, SILU
    • Position embeddings: ALIBI, RoPE, Sinusoidal
    • Grouped-query attention
    • Parallel attention
  • Supported technologies and options related to serving:

    • Linear quantization of weights and KV cache elements: 2-bit, 3-bit, 3.5-bit, 4-bit, 5-bit, 6-bit, and 8-bit
    • The option of moving part or all of the KV cache from VRAM to regular RAM
    • The option of placing the input embedding tensor(s) in regular RAM
    • Model partitioning strategies for multi-GPU inference: partition-by-layer, partition-by-tensor, hybrid partitioning
    • Dynamic batching
    • Decoding strategies: Greedy, top-k, top-p, FSD, typical, mirostat...
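To illustrate two of the listed decoding strategies, here is a NumPy sketch of greedy decoding and top-p (nucleus) sampling over a single step's logits; it shows the general algorithms, not Inferflow's implementation.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())
    return z / z.sum()

def greedy(logits: np.ndarray) -> int:
    """Greedy decoding: always pick the highest-scoring token."""
    return int(np.argmax(logits))

def sample_top_p(logits: np.ndarray, p: float = 0.9, rng=None) -> int:
    """Top-p (nucleus) sampling: sample from the smallest set of
    tokens whose cumulative probability reaches p."""
    rng = rng or np.random.default_rng()
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]            # tokens, most probable first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1  # size of the nucleus
    keep = order[:cutoff]
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))

logits = np.array([2.0, 1.0, 0.5, -3.0])
best = greedy(logits)   # token 0
tok = sample_top_p(logits, p=0.9, rng=np.random.default_rng(1))
```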

Supported Transformer Models

  • Decoder-only: Inferflow supports many types of decoder-only transformer models.
  • Encoder-decoder: Some types of encoder-decoder models are supported.
  • Encoder-only: Some types of encoder-only models are supported.

Models with Predefined Specification Files

Users can serve a model with Inferflow by editing a model specification file. We have built predefined specification files for some popular or representative models. Below is a list of such models.

  • Aquila (aquila_chat2_34b)
  • Baichuan (baichuan2_7b_chat, baichuan2_13b_chat)
  • BERT (bert-base-multilingual-cased)
  • Bloom (bloomz_3b)
  • ChatGLM (chatglm2_6b)
  • Deepseek (deepseek_moe_16b_base)
  • Facebook m2m100 (facebook_m2m100_418m)
  • Falcon (falcon_7b_instruct, falcon_40b_instruct)
  • FuseLLM (fusellm_7b)
  • Gemma (gemma_2b_it)
  • Internlm (internlm-chat-20b)
  • LLAMA2 (llama2_7b, llama2_7b_chat, llama2_13b_chat)
  • MiniCPM (minicpm_2b_dpo_bf16)
  • Mistral (mistral_7b_instruct)
  • Mixtral (mixtral_8x7b_instruct_v0.1)
  • Open LLAMA (open_llama_3b)
  • OPT (opt_350m, opt_13b, opt_iml_max_30b)
  • Orion (orion_14b_chat)
  • Phi-2 (phi_2)
  • Qwen (qwen1.5_7b_chat)
  • XVERSE (xverse_13b_chat)
  • YI (yi_6b, yi_34b_chat)

Getting Started

Windows users: Please refer to docs/getting_started.win.md for instructions on building and running the Inferflow tools and service on Windows.

The following instructions are for Linux, Mac, and WSL (Windows Subsystem for Linux).

Get the Code

git clone https://github.com/inferflow/inferflow
cd inferflow

Build

  • Build the GPU version (that supports GPU/CPU hybrid inference):

    mkdir build/gpu
    cd build/gpu
    cmake ../.. -DUSE_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES=75
    make install -j 8
  • Build the CPU-only version:

    mkdir build/cpu
    cd build/cpu
    cmake ../.. -DUSE_CUDA=0
    make install -j 8

Upon a successful build, executables are generated and copied to bin/release/.

Run the LLM Inferencing Tool (bin/llm_inference)

  • Example-1: Load a tiny model and perform inference

    • Step-1: Download the model

      #> cd {inferflow-root-dir}/data/models/llama2.c/
      #> bash download.sh

      Instead of running the above script, you can also manually download the model files and copy them to the above folder. The source URL and file names can be found in download.sh.

    • Step-2: Run the llm_inference tool:

      #> cd {inferflow-root-dir}/bin/
      #> release/llm_inference llm_inference.tiny.ini

      Note that llm_inference and llm_inference.tiny.ini need not be in the same folder (llm_inference.tiny.ini is in bin/, while llm_inference is in bin/release/).

  • Example-2: Run the llm_inference tool to load a larger model for inference

    • Step-1: Edit the configuration file bin/inferflow_service.ini to choose a model.

      In the "transformer_engine" section of bin/inferflow_service.ini, there are multiple lines starting with "models =" or ";models =". The lines starting with ";" are comments. To choose a model for inference, uncomment the line corresponding to that model and comment out the lines of the other models. By default, the phi-2 model is selected. Please refer to docs/model_serving_config.md for more information about editing the configuration of inferflow_service.
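      Schematically, the relevant part of the section looks as follows (only the "models =" lines are shown; the commented model names are examples from the predefined-model list, and other options in the section are omitted):

      ```ini
      [transformer_engine]
      ;models = llama2_7b_chat
      ;models = mistral_7b_instruct
      models = phi_2
      ```

      Only the uncommented models line takes effect.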

    • Step-2: Download the selected model

      #> cd {inferflow-root-dir}/data/models/{model-name}/
      #> bash download.sh
    • Step-3: Edit the configuration file bin/llm_inference.ini to choose or edit a query.

      In the configuration file, queries are organized into query lists. A query list can contain one or more queries. Different query lists serve different purposes; for example, query_list.decoder_only is for testing decoder-only models, and its details can be configured in the query_list.decoder_only section. The first line of this section is "query_count = 1", which means only one query is included in this query list. Among the following lines with key query1, only one line is uncommented and therefore effective; the other lines (those starting with ";") are commented out. You can choose a query for testing by uncommenting it and commenting out all the others. You can, of course, also add new queries or change the content of an existing query.
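      Schematically (the query texts here are illustrative), the section looks like:

      ```ini
      [query_list.decoder_only]
      query_count = 1
      query1 = Write an article about the weather of Seattle.
      ;query1 = (an alternative query, disabled by the leading ";")
      ```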

    • Step-4: Run the tool:

      #> cd {inferflow-root-dir}/bin/
      #> release/llm_inference

Run the Inferflow Service (bin/inferflow_service)

  • Step-1: Edit the service configuration file (bin/inferflow_service.ini)

  • Step-2: Start the service:

    #> cd bin
    #> release/inferflow_service

Test the Inferflow service

Run an HTTP client to interact with the Inferflow service via the HTTP protocol and get inference results.

  • Option-1. Run the Inferflow client tool: inferflow_client

    • Step-1: Edit the configuration file (bin/inferflow_client.ini) to set the service address, query text, and options.

    • Step-2: Run the client tool to get inference results.

    #> cd bin
    #> release/inferflow_client
  • Option-2. The curl command

    You can also use the curl command to send an HTTP POST request to the Inferflow service and get inference results. Below is an example:

    curl -X POST -d '{"text": "Write an article about the weather of Seattle.", "res_prefix": "", "decoding_alg": "sample.top_p", "random_seed": 1, "temperature": 0.7, "is_streaming_mode": false}' localhost:8080
  • Option-3. Use a GUI REST client (e.g., the Tabbed Postman Chrome extension).

    • URL: http://localhost:8080 (if you access the service from a different machine, replace "localhost" with the service IP)

    • HTTP method: POST

    • Example body text: {"text": "Write an article about the weather of Seattle.", "res_prefix": "", "decoding_alg": "sample.top_p", "random_seed": 1, "temperature": 0.7, "is_streaming_mode": 0}
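For reference, the same request can be sent from Python using only the standard library. The helper names build_body and query_service below are hypothetical (not part of Inferflow); the request body mirrors the example above.

```python
import json
from urllib import request

SERVICE_URL = "http://localhost:8080"  # replace localhost with the service IP if remote

def build_body(text: str, streaming: bool = False) -> dict:
    """Assemble the request body documented above."""
    return {
        "text": text,
        "res_prefix": "",
        "decoding_alg": "sample.top_p",
        "random_seed": 1,
        "temperature": 0.7,
        "is_streaming_mode": streaming,
    }

def query_service(text: str) -> str:
    """POST the body to the Inferflow service and return the raw response."""
    data = json.dumps(build_body(text)).encode("utf-8")
    req = request.Request(SERVICE_URL, data=data, method="POST")
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# query_service("Write an article about the weather of Seattle.")  # needs a running service
```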

Compatibility with OpenAI's Chat Completions API

The Inferflow service also supports OpenAI's Chat Completions API. The API can be tested in one of the following ways.

  • Option-1: The OpenAI Python API Library

    Below is sample code. Please install the openai Python module (pip install openai) before running it.

    import openai

    openai.base_url = "http://localhost:8080"
    openai.api_key = "sk-no-key-required"

    is_streaming = True
    response = openai.chat.completions.create(
        model="default",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write an article about the weather of Seattle."}
        ],
        stream=is_streaming
    )
    if is_streaming:
        for chunk in response:
            print(chunk.choices[0].delta.content or "", end="")
    else:
        print(response.choices[0].message.content)
  • Option-2: The CURL command

    curl -X POST -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Write an article about the weather of Seattle."}], "stream": true}' http://localhost:8080/chat/completions

Reference

If you are interested in our work, please kindly cite:

@misc{shi2024inferflow,
  title={Inferflow: an Efficient and Highly Configurable Inference Engine for Large Language Models},
  author={Shuming Shi and Enbo Zhao and Deng Cai and Leyang Cui and Xinting Huang and Huayang Li},
  year={2024},
  eprint={2401.08294},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

Acknowledgements

Inferflow is inspired by the awesome projects llama.cpp and llama2.c. The CPU inference part of Inferflow is based on the ggml library. The FP16 data type in the CPU-only version of Inferflow comes from the half-precision floating-point library. We express our sincere gratitude to the maintainers and implementers of these codebases and tools.

