codefuse-ai/MFTCoder

License: MIT

[中文] [English]

Contents

  • News
  • HumanEval Performance
  • Articles
  • Introduction
  • Frameworks
  • Highlights
  • Requirements
  • Training
  • Models
  • Datasets
  • Contributing
  • Citation

News

🔥🔥 [2023/11/07] The MFTCoder paper has been released on arXiv, disclosing the technical details of multi-task fine-tuning.

🔥🔥 [2023/10/20] CodeFuse-QWen-14B has been released, achieving a pass@1 (greedy decoding) score of 48.8% on HumanEval, a 16% absolute improvement over the base model Qwen-14b.

🔥🔥 [2023/09/27] CodeFuse-StarCoder-15B has been released, achieving a pass@1 (greedy decoding) score of 54.9% on HumanEval.

🔥🔥🔥 [2023/09/26] We are pleased to announce the release of the 4-bit quantized version of CodeFuse-CodeLlama-34B. Despite quantization, the model still achieves a remarkable 73.8% (greedy decoding) on the HumanEval pass@1 metric.

🔥🔥🔥 [2023/09/07] We released CodeFuse-CodeLlama-34B, which achieves 74.4% Python pass@1 (greedy decoding) and surpasses GPT-4 (2023/03/15) and ChatGPT-3.5 on the HumanEval benchmark.

🔥🔥 [2023/08/26] We released MFTCoder, which supports fine-tuning Code Llama, Llama, Llama 2, StarCoder, ChatGLM2, CodeGeeX2, Qwen, and GPT-NeoX models with LoRA/QLoRA.

HumanEval Performance

Model | HumanEval (Pass@1) | Date
CodeFuse-CodeLlama-34B | 74.4% | 2023/09
CodeFuse-CodeLlama-34B-4bits | 73.8% | 2023/09
WizardCoder-Python-34B-V1.0 | 73.2% | 2023/08
GPT-4 (zero-shot) | 67.0% | 2023/03
PanGu-Coder2 15B | 61.6% | 2023/08
CodeFuse-StarCoder-15B | 54.9% | 2023/08
CodeLlama-34b-Python | 53.7% | 2023/08
CodeFuse-QWen-14B | 48.8% | 2023/10
CodeLlama-34b | 48.8% | 2023/08
GPT-3.5 (zero-shot) | 48.1% | 2022/11
OctoCoder | 46.2% | 2023/08
StarCoder-15B | 33.6% | 2023/05
QWen-14B | 32.3% | 2023/10

Articles

MFT arXiv paper

Introduction

High accuracy and efficiency multi-task fine-tuning framework for Code LLMs. This work has been accepted by KDD 2024.

CodeFuse-MFTCoder is an open-source project of CodeFuse for multi-task Code LLMs (large language models for code tasks), which includes models, datasets, training codebases, and inference guides. In MFTCoder, we release two codebases for fine-tuning large language models:

  • mft_peft_hf is based on the HuggingFace Accelerate and DeepSpeed frameworks.
  • mft_atorch is based on the ATorch framework, a fast distributed training framework for LLMs.

The aim of this project is to foster collaboration and share advancements in large language models, particularly within the domain of code development.

Frameworks

(Framework architecture diagram)

Highlights

Multi-task: Train models on multiple tasks while maintaining a balance between them. The models can even generalize to new, previously unseen tasks.

Multi-model: It integrates state-of-the-art open-source models such as GPT-NeoX, LLaMA, LLaMA 2, Baichuan, Qwen, ChatGLM2, and more. (These fine-tuned models will be released in the near future.)

Multi-framework: It provides support for both HuggingFace Accelerate (with DeepSpeed) and ATorch.

Efficient fine-tuning: It supports LoRA and QLoRA, enabling fine-tuning of large models with minimal resources. The training speed meets the demands of almost all fine-tuning scenarios.
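
For concreteness, here is a minimal sketch of what a QLoRA setup generally looks like using the standard HuggingFace transformers, peft, and bitsandbytes APIs. It is not MFTCoder's own training entry point; the model path and hyperparameters are illustrative placeholders.

    # Minimal QLoRA sketch with generic HuggingFace APIs (not the MFTCoder training scripts).
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_name = "path/to/base-model"  # placeholder; e.g. a CodeLlama or Qwen checkpoint

    # Load the base model in 4-bit (NF4), as QLoRA does.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
    model = prepare_model_for_kbit_training(model)

    # Attach low-rank adapters; only these small matrices are trained.
    lora_config = LoraConfig(
        r=64, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative, LLaMA-style naming
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()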

The main components of this project include:

  • Support for both SFT (Supervised FineTuning) and MFT (Multi-task FineTuning). The current MFTCoder achieves data balance among multiple tasks, and future releases will achieve a balance between task difficulty and convergence speed during training.
  • Support for QLoRA instruction fine-tuning, as well as LoRA fine-tuning.
  • Support for most mainstream open-source large models, particularly those relevant to Code LLMs, such as Code-LLaMA, StarCoder, CodeGeeX2, Qwen, GPT-NeoX, and more.
  • Support for weight merging between the LoRA adapter and base models, simplifying the inference process (a minimal merge sketch follows this list).
  • Release of 2 high-quality code-related instruction fine-tuning datasets: Evol-instruction-66k and CodeExercise-Python-27k.
  • Release of 2 models: CodeFuse-13B and CodeFuse-CodeLlama-34B.
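
As a hedged illustration of the weight-merging step mentioned above, the sketch below uses the generic peft API rather than MFTCoder's own merge utilities; the model and adapter paths are placeholders.

    # Minimal sketch: fold a LoRA adapter into its base model with HuggingFace peft.
    # Paths are placeholders; MFTCoder provides its own merge tooling.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_path = "path/to/base-model"       # e.g. a CodeLlama checkpoint (placeholder)
    adapter_path = "path/to/lora-adapter"  # adapter produced by LoRA/QLoRA fine-tuning (placeholder)

    base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
    model = PeftModel.from_pretrained(base, adapter_path)

    # Merge the LoRA weights into the base weights so inference needs no peft at runtime.
    merged = model.merge_and_unload()
    merged.save_pretrained("path/to/merged-model")
    AutoTokenizer.from_pretrained(base_path).save_pretrained("path/to/merged-model")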

Requirements

To begin, ensure that you have successfully installed CUDA (version >= 11.4, preferably 11.7) along with the necessary drivers. Additionally, make sure you have installed torch (version 2.0.1).

Next, we have provided an init_env.sh script to simplify the installation of required packages. Execute the following command to run the script:

sh init_env.sh

If you require flash attention, please refer to the following link for installation instructions: https://github.com/Dao-AILab/flash-attention
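
As a quick sanity check (not part of the repository's scripts), you can confirm that torch and CUDA are visible before running init_env.sh:

    # Quick environment check: verify the torch version and CUDA availability.
    import torch

    print("torch version:", torch.__version__)          # expected 2.0.1 per the requirements above
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("CUDA version seen by torch:", torch.version.cuda)
        print("GPU:", torch.cuda.get_device_name(0))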

Training

🚀 HuggingFace Accelerate + DeepSpeed codebase for MFT (Multi-task Fine-Tuning)

🚀 ATorch codebase for MFT (Multi-task Fine-Tuning)

Models

We are excited to release the following CodeLLMs trained by MFTCoder, now available on Hugging Face:

Model | Base Model | Num of examples trained | Batch Size | Seq Length
🔥🔥🔥 CodeFuse-CodeLlama-34B | CodeLlama-34b-Python | 600k | 80 | 4096
🔥🔥🔥 CodeFuse-CodeLlama-34B-4bits | CodeLlama-34b-Python | – | – | 4096
🔥🔥🔥 CodeFuse-StarCoder-15B | Starcoder | 600k | 256 | 4096
🔥🔥🔥 CodeFuse-QWen-14B | Qwen-14b | 1100k | 256 | 4096
🔥 CodeFuse-13B | CodeFuse-13B | 66k | 64 | 4096
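
If you want to try one of these models, a minimal loading sketch with the standard transformers API is shown below. The Hugging Face model ID is assumed to follow the codefuse-ai organization naming, and the prompt format is only illustrative; check the individual model card for the exact template.

    # Minimal inference sketch with the standard transformers API.
    # Model ID and prompt format are assumptions; see the model card for specifics.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "codefuse-ai/CodeFuse-CodeLlama-34B"  # assumed Hugging Face ID

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
    )

    prompt = "Write a Python function that checks whether a string is a palindrome."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)  # greedy decoding, as in the table above
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))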

Datasets

We are also pleased to release two code-related instruction datasets, meticulously selected from a range of datasets to facilitate multitask training. Moving forward, we are committed to releasing additional instruction datasets covering various code-related tasks.

Dataset | Description
⭐ Evol-instruction-66k | Based on open-evol-instruction-80k; low-quality, repeated, and HumanEval-similar instructions are filtered out to obtain a high-quality code instruction dataset.
⭐ CodeExercise-Python-27k | A Python code exercise instruction dataset.
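
A hedged loading sketch with the HuggingFace datasets library is shown below; the dataset IDs are assumed to live under the codefuse-ai organization on Hugging Face, and the record fields may differ from what the example implies.

    # Sketch: load the released instruction datasets with the HuggingFace datasets library.
    # Dataset IDs are assumed to be under the codefuse-ai organization.
    from datasets import load_dataset

    evol = load_dataset("codefuse-ai/Evol-instruction-66k", split="train")
    exercises = load_dataset("codefuse-ai/CodeExercise-Python-27k", split="train")

    print(len(evol), "Evol-instruction examples")
    print(evol[0])  # inspect one record to see the actual field names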

Contributing

Contributions are welcome! If you have any suggestions, ideas, bug reports, or requests for new model/feature support, please open an issue or submit a pull request.

Citation

If you find our work useful or helpful for your R&D work, please feel free to cite our paper as shown below.

@article{mftcoder2023,
  title={MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning},
  author={Bingchang Liu and Chaoyu Chen and Cong Liao and Zi Gong and Huan Wang and Zhichao Lei and Ming Liang and Dajun Chen and Min Shen and Hailian Zhou and Hang Yu and Jianguo Li},
  year={2023},
  journal={arXiv preprint arXiv:2311.02303},
  archivePrefix={arXiv},
  eprint={2311.02303}
}

