Fine-tune Phi 2 for persona-grounded chat

This project is a tutorial on parameter-efficient fine-tuning (PEFT) and quantization of the Phi 2 model for persona-grounded chat. We use LoRA for PEFT and 4-bit quantization to compress the model, and fine-tune it on a new persona-based chat dataset, nazlicanto/persona-based-chat. Refer to the dataset page and the Hugging Face model page for additional details.
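For reference, a minimal sketch of what this kind of setup typically looks like with `transformers`, `bitsandbytes`, and `peft`. The hyperparameters and LoRA target modules below are illustrative assumptions, not necessarily the values used in finetune_phi.py:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit (NF4) quantization config for the base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA config: rank, scaling, dropout, and target modules are guesses
# for illustration; check finetune_phi.py for the actual values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```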

Usage

Start by cloning the repository, setting up a conda environment, and installing the dependencies. We tested our scripts with Python 3.9 and CUDA 11.7.

```
git clone https://github.com/alaradirik/finetune-phi-2.git
cd finetune-phi-2
conda create -n llm python=3.9
conda activate llm
pip install -r requirements.txt
```

You can fine-tune the model on the persona-based chat dataset or another dataset. Note that your dataset needs to expose the same features as ours, and you must pass your HF Hub token as an argument if the dataset is private (see the feature check after the command below). Fine-tuning takes about 9 hours on a single A40; you can either use the default accelerate settings or configure it to use multiple GPUs. To fine-tune the model:

```
accelerate config default
python finetune_phi.py \
    --dataset=<HF_DATASET_ID_OR_PATH> \
    --base_model="microsoft/phi-2" \
    --model_name=<YOUR_MODEL_NAME> \
    --auth_token=<HF_AUTH_TOKEN> \
    --push_to_hub
```
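If you are bringing your own dataset, one quick way to confirm it has the required features is to compare its column names against the reference dataset. This is a minimal sketch; the `token` argument stands in for the same HF Hub token passed as --auth_token above:

```python
from datasets import load_dataset

# Reference dataset used in this tutorial.
reference = load_dataset("nazlicanto/persona-based-chat", split="train")

# Hypothetical: your own (possibly private) dataset; pass your HF Hub token if needed.
custom = load_dataset("<HF_DATASET_ID_OR_PATH>", split="train", token="<HF_AUTH_TOKEN>")

# The fine-tuning script expects the same features (column names and types).
assert custom.column_names == reference.column_names, (
    f"feature mismatch: {custom.column_names} vs {reference.column_names}"
)
```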

Once model training is completed, only the fine-tuned (LoRA) parameters are saved; these are loaded to overwrite the corresponding parameters of the base model during testing. To test the fine-tuned model with a random sample selected from the dataset, run `python test.py`.
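For context, loading a saved LoRA adapter back onto the base model usually looks like the sketch below with `peft`. The adapter path and prompt format are placeholders; test.py may organize this differently:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Hypothetical adapter location: a local directory or a Hub repo ID
# produced by the fine-tuning run above.
model = PeftModel.from_pretrained(base, "<YOUR_MODEL_NAME>")

# Prompt format here is illustrative; see the dataset page for the actual format.
prompt = "Persona: I love hiking.\nUser: What do you do on weekends?\nBot:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```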

License

The code and trained model are licensed under the MIT license.
