cccntu/minLoRA

minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.


A minimal but versatile PyTorch re-implementation of LoRA. In only ~100 lines of code, minLoRA supports the following features:

Features

  • Functional, no need to modify the model definition
  • Works everywhere, as long as you use torch.nn.Module
  • PyTorch native, uses PyTorch's torch.nn.utils.parametrize to do all the heavy lifting
  • Easily extendable, you can add your own LoRA parameterization
  • Supports training, inference, and inference with multiple LoRA models
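To illustrate the parametrization mechanism the features above rely on, here is a hypothetical minimal sketch (not minLoRA's actual class) of attaching a low-rank update to a layer's weight with `torch.nn.utils.parametrize`:

```python
import torch
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(torch.nn.Module):
    """Illustrative parametrization: the effective weight becomes W + B @ A."""
    def __init__(self, fan_out, fan_in, rank=4):
        super().__init__()
        self.lora_A = torch.nn.Parameter(torch.randn(rank, fan_in) * 0.01)
        self.lora_B = torch.nn.Parameter(torch.zeros(fan_out, rank))  # zero-init: no change at start

    def forward(self, W):
        # Called whenever layer.weight is accessed; W is the original weight
        return W + self.lora_B @ self.lora_A

layer = torch.nn.Linear(5, 3)
parametrize.register_parametrization(layer, "weight", LoRAParametrization(3, 5))
```

Because `lora_B` starts at zero, the layer initially behaves exactly like the unmodified model; training moves only the low-rank factors.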

Demo

  • demo.ipynb shows the basic usage of the library
  • advanced_usage.ipynb shows how you can add LoRA to other layers such as embedding, and how to tie weights

Examples

Library Installation

If you want to import minlora into your project:

```
git clone https://github.com/cccntu/minLoRA.git
cd minLoRA
pip install -e .
```

Usage

```python
import torch
from minlora import (
    add_lora,
    apply_to_lora,
    disable_lora,
    enable_lora,
    get_lora_params,
    get_lora_state_dict,
    merge_lora,
    name_is_lora,
    remove_lora,
    load_multiple_lora,
    select_lora,
)
```

Training a model with minLoRA

```python
model = torch.nn.Linear(in_features=5, out_features=3)

# Step 1: Add LoRA to the model
add_lora(model)

# Step 2: Collect the parameters, pass them to the optimizer
parameters = [
    {"params": list(get_lora_params(model))},
]
optimizer = torch.optim.AdamW(parameters, lr=1e-3)

# Step 3: Train the model
# ...

# Step 4: Export the LoRA parameters
lora_state_dict = get_lora_state_dict(model)
```
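The key point of Step 2 is that only the LoRA parameters reach the optimizer, so the pretrained weights stay frozen. That idea can be sketched in plain PyTorch with hand-rolled factors (`lora_A`/`lora_B` here are illustrative names, not minLoRA's API):

```python
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(5, 3)
for p in layer.parameters():
    p.requires_grad_(False)  # freeze the pretrained weights
base_weight = layer.weight.detach().clone()

rank = 2  # assumed rank for this sketch
lora_A = torch.nn.Parameter(torch.randn(rank, 5) * 0.01)
lora_B = torch.nn.Parameter(torch.zeros(3, rank))

# Only the low-rank factors are handed to the optimizer
optimizer = torch.optim.AdamW([lora_A, lora_B], lr=1e-2)

def forward(x):
    # Base output plus the low-rank correction x @ (B @ A)^T
    return layer(x) + x @ (lora_B @ lora_A).t()

for _ in range(5):
    loss = forward(torch.randn(8, 5)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, `layer.weight` is bit-for-bit unchanged; all of the adaptation lives in the two small factor matrices.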

Loading and Inferencing with minLoRA

```python
# Step 1: Add LoRA to your model
add_lora(model)

# Step 2: Load the LoRA parameters
_ = model.load_state_dict(lora_state_dict, strict=False)

# Step 3: Merge the LoRA parameters into the model
merge_lora(model)
```
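Why merging works: folding the low-rank product into the weight once gives exactly the same outputs as applying the update on the side, while removing the extra matmul at inference time. A minimal sketch with plain tensors (all names here are illustrative):

```python
import torch

torch.manual_seed(0)
W = torch.randn(3, 5)           # base weight
A = torch.randn(2, 5) * 0.1     # LoRA down-projection
B = torch.randn(3, 2) * 0.1     # LoRA up-projection
x = torch.randn(4, 5)

# Unmerged: base weight and low-rank update applied separately
y_unmerged = x @ W.t() + x @ (B @ A).t()

# Merged: fold the update into the weight once, then a single matmul
W_merged = W + B @ A
y_merged = x @ W_merged.t()

assert torch.allclose(y_unmerged, y_merged, atol=1e-6)
```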

Inferencing with multiple LoRA models

```python
# To avoid re-adding LoRA to the model when rerunning the cell, remove LoRA first
remove_lora(model)

# Step 1: Add LoRA to your model
add_lora(model)

# Step 2: Load the LoRA parameters
# load three sets of LoRA parameters
lora_state_dicts = [lora_state_dict_0, lora_state_dict_1, lora_state_dict_2]
load_multiple_lora(model, lora_state_dicts)

# Step 3: Select which LoRA to use at inference time
Y0 = select_lora(model, 0)(x)
Y1 = select_lora(model, 1)(x)
Y2 = select_lora(model, 2)(x)
```
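The idea behind multi-LoRA inference is that one shared base weight can carry several small updates, and you pick one per call. A toy sketch with plain tensors (the stand-in `deltas` play the role of the per-adapter products B_i @ A_i; all names are illustrative):

```python
import torch

torch.manual_seed(0)
W = torch.randn(3, 5)  # shared base weight
# Three stand-in low-rank updates, one per loaded adapter
deltas = [torch.randn(3, 2) @ torch.randn(2, 5) * 0.1 for _ in range(3)]
x = torch.randn(4, 5)

def select(i):
    # Return a forward function that uses update i on top of the shared base
    return lambda x: x @ (W + deltas[i]).t()

Y0, Y1, Y2 = select(0)(x), select(1)(x), select(2)(x)
```

Each call reuses the same base weight `W`; only the small per-adapter update changes, which is what keeps serving many adapters cheap.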

TODO

  • A notebook to show how to configure LoRA parameters
  • Real training & inference examples
