# minLoRA
A minimal, but versatile PyTorch re-implementation of [LoRA](https://arxiv.org/abs/2106.09685). In only ~100 lines of code, minLoRA supports the following features:
- Functional, no need to modify the model definition
- Works everywhere, as long as you use `torch.nn.Module`
- PyTorch native, uses PyTorch's `torch.nn.utils.parametrize` to do all the heavy lifting (see the sketch after this list)
- Easily extendable, you can add your own LoRA parameterization
- Supports training, inference, and inference with multiple LoRA models
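To make the parametrize-based design concrete, here is a minimal sketch of how a LoRA-style parametrization can be registered on a linear layer. `SimpleLoRA` and its `rank`/`alpha` arguments are illustrative names, not minLoRA's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class SimpleLoRA(nn.Module):
    """Illustrative LoRA parametrization; minLoRA's own class differs in detail."""
    def __init__(self, fan_out, fan_in, rank=4, alpha=1.0):
        super().__init__()
        # A starts small and random, B starts at zero, so training begins
        # exactly at the original weight
        self.lora_A = nn.Parameter(torch.randn(rank, fan_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(fan_out, rank))
        self.scaling = alpha / rank

    def forward(self, W):
        # Called on every access to `weight`: return W plus the low-rank update
        return W + self.scaling * (self.lora_B @ self.lora_A)

layer = nn.Linear(5, 3)
parametrize.register_parametrization(layer, "weight", SimpleLoRA(*layer.weight.shape))
```

Because the update lives in a parametrization rather than a subclass, the model definition itself never changes, which is what lets the library work on any `torch.nn.Module`.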
## Demo

- `demo.ipynb` shows the basic usage of the library
- `advanced_usage.ipynb` shows how you can add LoRA to other layers such as embedding, and how to tie weights
- Finetuning GPT using LoRA + nanoGPT: https://github.com/cccntu/LoRAnanoGPT/pull/1/files
## Library installation

If you want to `import minlora` into your project:

```bash
git clone https://github.com/cccntu/minLoRA.git
cd minLoRA
pip install -e .
```
## Usage

```python
import torch
from minlora import (
    add_lora, apply_to_lora, disable_lora, enable_lora, get_lora_params,
    get_lora_state_dict, merge_lora, name_is_lora, remove_lora,
    load_multiple_lora, select_lora,
)
```
### Training

```python
model = torch.nn.Linear(in_features=5, out_features=3)

# Step 1: Add LoRA to the model
add_lora(model)

# Step 2: Collect the parameters, pass them to the optimizer
parameters = [
    {"params": list(get_lora_params(model))},
]
optimizer = torch.optim.AdamW(parameters, lr=1e-3)

# Step 3: Train the model
# ...

# Step 4: Export the LoRA parameters
lora_state_dict = get_lora_state_dict(model)
```
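For concreteness, Step 3 can be any ordinary PyTorch training loop. The data and loss below are invented purely for illustration:

```python
# Hypothetical toy data and loss, just to show the shape of the loop
x = torch.randn(16, 5)
y = torch.randn(16, 3)
loss_fn = torch.nn.MSELoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()  # only the LoRA parameters move; they alone were given to the optimizer
```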
### Loading and inferencing

```python
# Step 1: Add LoRA to your model
add_lora(model)

# Step 2: Load the LoRA parameters
_ = model.load_state_dict(lora_state_dict, strict=False)

# Step 3: Merge the LoRA parameters into the model
merge_lora(model)
```
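After `merge_lora`, the low-rank update is folded into the weights, so the model can be used like an unmodified module. A minimal sanity check, assuming the usual LoRA merge semantics (W ← W + BA):

```python
x = torch.randn(1, 5)  # example input, invented for illustration
y = model(x)           # behaves like a plain nn.Linear; no LoRA overhead at inference
```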
### Inferencing with multiple LoRA models

```python
# To avoid re-adding LoRA when rerunning the cell, remove it first
remove_lora(model)

# Step 1: Add LoRA to your model
add_lora(model)

# Step 2: Load the LoRA parameters
# lora_state_dict_0/1/2 are three sets of LoRA parameters, each previously
# exported with get_lora_state_dict
lora_state_dicts = [lora_state_dict_0, lora_state_dict_1, lora_state_dict_2]
load_multiple_lora(model, lora_state_dicts)

# Step 3: Select which LoRA to use at inference time
x = torch.randn(1, 5)  # example input
Y0 = select_lora(model, 0)(x)
Y1 = select_lora(model, 1)(x)
Y2 = select_lora(model, 2)(x)
```
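The imported `disable_lora` / `enable_lora` pair is not shown above; assuming the semantics their names suggest, they toggle the LoRA branches in place, which makes it easy to compare base and adapted outputs:

```python
disable_lora(model)   # forward passes use only the original weights
y_base = model(x)
enable_lora(model)    # the LoRA update is applied again
y_adapted = model(x)
```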
## References

- [microsoft/LoRA](https://github.com/microsoft/LoRA) has the official implementation of LoRA, in PyTorch
- [karpathy/minGPT](https://github.com/karpathy/minGPT): the structure of this repo is adapted from minGPT
## TODO

- A notebook to show how to configure LoRA parameters
- Real training & inference examples