
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)


🚀 Speed Benchmark

  • Llama2 7B Training Speed
  • Llama2 70B Training Speed

🎉 News

  • [2024/04] LLaVA-Phi-3-mini is released! Click here for details!
  • [2024/04] LLaVA-Llama-3-8B and LLaVA-Llama-3-8B-v1.1 are released! Click here for details!
  • [2024/04] Support Llama 3 models!
  • [2024/04] Support Sequence Parallel for enabling highly efficient and scalable LLM training with extremely long sequence lengths! [Usage] [Speed Benchmark]
  • [2024/02] Support Gemma models!
  • [2024/02] Support Qwen1.5 models!
  • [2024/01] Support InternLM2 models! The latest VLMs LLaVA-Internlm2-7B / 20B are released, with impressive performance!
  • [2024/01] Support DeepSeek-MoE models! 20GB GPU memory is enough for QLoRA fine-tuning, and 4x80GB for full-parameter fine-tuning. Click here for details!
  • [2023/12] 🔥 Support multi-modal VLM pretraining and fine-tuning with the LLaVA-v1.5 architecture! Click here for details!
  • [2023/12] 🔥 Support Mixtral 8x7B models! Click here for details!
  • [2023/11] Support the ChatGLM3-6B model!
  • [2023/10] Support the MSAgent-Bench dataset; the fine-tuned LLMs can be applied by Lagent!
  • [2023/10] Optimize the data processing to accommodate system context. More information can be found in the Docs!
  • [2023/09] Support InternLM-20B models!
  • [2023/09] Support Baichuan2 models!
  • [2023/08] XTuner is released, with multiple fine-tuned adapters on Hugging Face.

📖 Introduction

XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.

Efficient

  • Support LLM and VLM pre-training / fine-tuning on almost all GPUs. XTuner is capable of fine-tuning a 7B LLM on a single 8GB GPU, as well as multi-node fine-tuning of models exceeding 70B.
  • Automatically dispatch high-performance operators such as FlashAttention and Triton kernels to increase training throughput.
  • Compatible with DeepSpeed 🚀, making it easy to utilize a variety of ZeRO optimization techniques (see the example below).
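For instance, the ZeRO strategy is chosen with the --deepspeed flag of xtuner train. A minimal sketch, using a ready-made config name that appears again in the Quick Start section below:

    # Train with DeepSpeed ZeRO-2; swap in deepspeed_zero1 / deepspeed_zero3 as needed.
    xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2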

Flexible

  • Support various LLMs (InternLM, Mixtral-8x7B, Llama 2, ChatGLM, Qwen, Baichuan, ...).
  • Support VLMs (LLaVA). The performance of LLaVA-InternLM2-20B is outstanding.
  • Well-designed data pipeline, accommodating datasets in any format, including but not limited to open-source and custom formats (see the sketch after this list).
  • Support various training algorithms (QLoRA, LoRA, full-parameter fine-tuning), allowing users to choose the most suitable solution for their requirements.
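As a hedged illustration of a custom dataset, a single-turn data file in XTuner's standardized JSON layout might look like the following; the field names here follow the conventions described in dataset_prepare.md and should be verified against that guide before use:

    [
      {
        "conversation": [
          {
            "input": "What is XTuner?",
            "output": "An efficient toolkit for fine-tuning large models."
          }
        ]
      }
    ]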

Full-featured

  • Support continuous pre-training, instruction fine-tuning, and agent fine-tuning.
  • Support chatting with large models with pre-defined templates.
  • The output models can seamlessly integrate with deployment and serving toolkits (LMDeploy), and large-scale evaluation toolkits (OpenCompass, VLMEvalKit).

🔥 Supports

Models | SFT Datasets | Data Pipelines | Algorithms

🛠️ Quick Start

Installation

  • It is recommended to build a Python 3.10 virtual environment using conda:

    conda create --name xtuner-env python=3.10 -y
    conda activate xtuner-env
  • Install XTuner via pip:

    pip install -U xtuner

    or with DeepSpeed integration:

    pip install -U 'xtuner[deepspeed]'
  • Install XTuner from source:

    git clone https://github.com/InternLM/xtuner.git
    cd xtuner
    pip install -e '.[all]'
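To sanity-check the installation (a minimal sketch; it assumes only that the package exposes a version string):

    python -c "import xtuner; print(xtuner.__version__)"
    xtuner list-cfg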

Fine-tune

XTuner supports efficient fine-tuning (e.g., QLoRA) of LLMs. Dataset preparation guides can be found in dataset_prepare.md.

  • Step 0, prepare the config. XTuner provides many ready-to-use configs, and we can view all of them by

    xtuner list-cfg

    Or, if the provided configs cannot meet the requirements, copy one of them to a specified directory and make the modifications you need:

    xtuner copy-cfg ${CONFIG_NAME} ${SAVE_PATH}
    vi ${SAVE_PATH}/${CONFIG_NAME}_copy.py
  • Step 1, start fine-tuning.

    xtuner train ${CONFIG_NAME_OR_PATH}

    For example, we can start the QLoRA fine-tuning of InternLM2-Chat-7B with the oasst1 dataset by

    # On a single GPU
    xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
    # On multiple GPUs
    (DIST) NPROC_PER_NODE=${GPU_NUM} xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
    (SLURM) srun ${SRUN_ARGS} xtuner train internlm2_chat_7b_qlora_oasst1_e3 --launcher slurm --deepspeed deepspeed_zero2

    • --deepspeed means using DeepSpeed 🚀 to optimize the training. XTuner comes with several integrated strategies, including ZeRO-1, ZeRO-2, and ZeRO-3. If you wish to disable this feature, simply remove this argument.

    • For more examples, please see finetune.md.

  • Step 2, convert the saved PTH model (if using DeepSpeed, it will be a directory) to a Hugging Face model (a worked end-to-end sketch follows this list):

    xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
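Putting the steps together, a hedged end-to-end sketch for the oasst1 example above. The work_dirs location and checkpoint name are illustrative: XTuner is assumed here to write checkpoints under ./work_dirs/<config name>/ by default, and the actual checkpoint file or directory depends on your run:

    # Step 0: copy a ready-made config and edit paths / hyperparameters.
    xtuner copy-cfg internlm2_chat_7b_qlora_oasst1_e3 .
    vi ./internlm2_chat_7b_qlora_oasst1_e3_copy.py

    # Step 1: fine-tune; checkpoints land under ./work_dirs/ (assumed default).
    xtuner train ./internlm2_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2

    # Step 2: convert the saved checkpoint to a Hugging Face adapter.
    # iter_500.pth is a placeholder; use the checkpoint your run produced.
    xtuner convert pth_to_hf ./internlm2_chat_7b_qlora_oasst1_e3_copy.py \
        ./work_dirs/internlm2_chat_7b_qlora_oasst1_e3_copy/iter_500.pth \
        ./hf_adapter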

Chat

XTuner provides tools to chat with pretrained / fine-tuned LLMs.

xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter ${NAME_OR_PATH_TO_ADAPTER} [optional arguments]

For example, we can start the chat with

InternLM2-Chat-7B with the adapter trained on the oasst1 dataset:

xtuner chat internlm/internlm2-chat-7b --adapter xtuner/internlm2-chat-7b-qlora-oasst1 --prompt-template internlm2_chat

LLaVA-InternLM2-7B:

xtuner chat internlm/internlm2-chat-7b --visual-encoder openai/clip-vit-large-patch14-336 --llava xtuner/llava-internlm2-7b --prompt-template internlm2_chat --image $IMAGE_PATH

For more examples, please see chat.md.

Deployment

  • Step 0, merge the Hugging Face adapter into the pretrained LLM:

    xtuner convert merge \
        ${NAME_OR_PATH_TO_LLM} \
        ${NAME_OR_PATH_TO_ADAPTER} \
        ${SAVE_PATH} \
        --max-shard-size 2GB
  • Step 1, deploy the fine-tuned LLM with any other framework, such as LMDeploy 🚀.

    pip install lmdeploy
    python -m lmdeploy.pytorch.chat ${NAME_OR_PATH_TO_LLM} \
        --max_new_tokens 256 \
        --temperature 0.8 \
        --top_p 0.95 \
        --seed 0

    🔥 Seeking efficient inference with less GPU memory? Try 4-bit quantization from LMDeploy! For more details, see here; a hedged sketch follows below.
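As a rough sketch only (the lite subcommand and its flags are assumptions that vary across LMDeploy versions; check the LMDeploy documentation for your release):

    # Quantize the merged model to 4-bit AWQ (command assumed; verify the
    # exact subcommand and flags for your LMDeploy version).
    lmdeploy lite auto_awq ${SAVE_PATH} --work-dir ${SAVE_PATH}-4bit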

Evaluation

  • We recommend using OpenCompass, a comprehensive and systematic LLM evaluation library, which currently supports 50+ datasets with about 300,000 questions (a hedged invocation sketch follows this list).
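A minimal sketch, assuming an OpenCompass checkout with its run.py entry point; the model and dataset config names used here are illustrative and must exist in your OpenCompass installation:

    # Evaluate a Hugging Face model on MMLU (config names assumed).
    python run.py --models hf_internlm2_chat_7b --datasets mmlu_gen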

🤝 Contributing

We appreciate all contributions to XTuner. Please refer to CONTRIBUTING.md for the contributing guideline.

🎖️ Acknowledgement

🖊️ Citation

@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished={\url{https://github.com/InternLM/xtuner}},
    year={2023}
}

License

This project is released under the Apache License 2.0. Please also adhere to the licenses of the models and datasets being used.
