ryan-air/Alpaca-350M-Fine-Tuned


Professional work-related project

In this project, I have provided code and a Colaboratory notebook that facilitate the fine-tuning of an Alpaca model originally developed at Stanford University. The particular model being fine-tuned has around 350 million parameters, making it one of the smaller Alpaca models (smaller than my previous fine-tuned model).

The model uses low-rank adaptation (LoRA) to run with fewer computational resources and trainable parameters. We use bitsandbytes to load and run the model in an 8-bit format so that it fits on a Colaboratory GPU. Furthermore, the PEFT library from HuggingFace was used for fine-tuning the model.
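
For reference, here is a minimal sketch of that 8-bit setup. It is not the exact notebook code: the checkpoint name is a placeholder, and the precise arguments may differ from what the notebook uses.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import prepare_model_for_kbit_training

BASE_MODEL = "path/to/alpaca-350m"  # placeholder: substitute the actual base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# bitsandbytes performs the 8-bit quantization when load_in_8bit=True;
# device_map="auto" places the layers on the available Colab GPU.
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,
    device_map="auto",
)

# Prepares the quantized model for adapter fine-tuning (e.g. casting
# layer norms to fp32) so gradients flow correctly through LoRA layers.
model = prepare_model_for_kbit_training(model)
```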

Hyperparameters (see the sketch after this list for how they plug into training):

  1. MICRO_BATCH_SIZE = 4 (4 works with a smaller GPU)
  2. BATCH_SIZE = 32
  3. GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
  4. EPOCHS = 2 (Stanford's Alpaca uses 3)
  5. LEARNING_RATE = 2e-5 (Stanford's Alpaca uses 2e-5)
  6. CUTOFF_LEN = 256 (Stanford's Alpaca uses 512, but 256 accounts for 96% of the data and runs far quicker)
  7. LORA_R = 4
  8. LORA_ALPHA = 16
  9. LORA_DROPOUT = 0.05
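
The sketch below wires these hyperparameters into PEFT's LoraConfig and the HuggingFace TrainingArguments. It reuses the 8-bit `model` from the previous sketch and omits dataset preparation and the Trainer itself; CUTOFF_LEN would be applied when tokenizing the training examples.

```python
from peft import LoraConfig, get_peft_model
from transformers import TrainingArguments

MICRO_BATCH_SIZE = 4
BATCH_SIZE = 32
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE  # 32 // 4 = 8
EPOCHS = 2
LEARNING_RATE = 2e-5
CUTOFF_LEN = 256  # max tokenized length, applied during dataset preprocessing
LORA_R = 4
LORA_ALPHA = 16
LORA_DROPOUT = 0.05

lora_config = LoraConfig(
    r=LORA_R,
    lora_alpha=LORA_ALPHA,
    lora_dropout=LORA_DROPOUT,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wraps the 8-bit model from the previous sketch with trainable LoRA adapters.
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="alpaca-350m-lora",
    per_device_train_batch_size=MICRO_BATCH_SIZE,
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
    num_train_epochs=EPOCHS,
    learning_rate=LEARNING_RATE,
    fp16=True,
    logging_steps=10,
)
```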

Credit for Original Model: Qiyuan Ge

Fine-Tuned Model: RyanAir/Alpaca-350M-Fine-Tuned (HuggingFace)
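
To try the fine-tuned model, a minimal sketch is below. It assumes the HuggingFace repo hosts a standalone model and that the standard Stanford Alpaca instruction prompt format applies; if only a LoRA adapter is published there, load the base model first and attach the adapter with PeftModel.from_pretrained instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RyanAir/Alpaca-350M-Fine-Tuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Standard Alpaca-style instruction prompt (assumed, not confirmed by the repo).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```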

