Lightning-AI/litgpt


20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale.

✅ From scratch implementations    ✅ No abstractions        ✅ Beginner friendly
✅ Flash attention                 ✅ FSDP                   ✅ LoRA, QLoRA, Adapter
✅ Reduce GPU memory (fp4/8/16/32) ✅ 1-1000+ GPUs/TPUs      ✅ 20+ LLMs


Quick start | Models | Finetune | Deploy | All workflows | Features | Recipes (YAML) | Lightning AI | Tutorials

 

Get started

 

Looking for GPUs?

Over 340,000 developers use Lightning Cloud - purpose-built for PyTorch and PyTorch Lightning.

Finetune, pretrain, and inference LLMs Lightning fast ⚡⚡

Every LLM is implemented from scratch with no abstractions and full control, making them blazing fast, minimal, and performant at enterprise scale.

Enterprise ready - Apache 2.0 for unlimited enterprise use.
Developer friendly - Easy debugging with no abstraction layers and single file implementations.
Optimized performance - Models designed to maximize performance, reduce costs, and speed up training.
Proven recipes - Highly-optimized training/finetuning recipes tested at enterprise scale.

 

Quick start

Install LitGPT

pip install 'litgpt[extra]'

Load and use any of the 20+ LLMs:

from litgpt import LLM

llm = LLM.load("microsoft/phi-2")
text = llm.generate("Fix the spelling: Every fall, the family goes to the mountains.")
print(text)
# Corrected Sentence: Every fall, the family goes to the mountains.

 

✅ Optimized for fast inference
✅ Quantization
✅ Runs on low-memory GPUs
✅ No layers of internal abstractions
✅ Optimized for production scale

Advanced install options

Install from source:

git clone https://github.com/Lightning-AI/litgpt
cd litgpt
pip install -e '.[all]'

Explore the full Python API docs.
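The same two calls also work in batch-style scripts. Below is a minimal sketch that reuses only the LLM.load and llm.generate calls from the quick-start example to run several prompts in one session; the prompt strings themselves are illustrative.

from litgpt import LLM

# Load the model once and reuse it for several prompts
llm = LLM.load("microsoft/phi-2")

prompts = [
    "Fix the spelling: Every fall, the family goes to the mountains.",
    "Summarize in one sentence: LitGPT implements 20+ LLMs from scratch.",
]

for prompt in prompts:
    text = llm.generate(prompt)
    print(f"Prompt: {prompt}\nOutput: {text}\n")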

 


Choose from 20+ LLMs

Every model is written from scratch to maximize performance and remove layers of abstraction:

Model | Model size | Author | Reference
Llama 3, 3.1, 3.2, 3.3 | 1B, 3B, 8B, 70B, 405B | Meta AI | Meta AI 2024
Code Llama | 7B, 13B, 34B, 70B | Meta AI | Rozière et al. 2023
CodeGemma | 7B | Google | Google Team, Google Deepmind
Gemma 2 | 2B, 9B, 27B | Google | Google Team, Google Deepmind
Phi 4 | 14B | Microsoft Research | Abdin et al. 2024
Qwen2.5 | 0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B | Alibaba Group | Qwen Team 2024
Qwen2.5 Coder | 0.5B, 1.5B, 3B, 7B, 14B, 32B | Alibaba Group | Hui, Binyuan et al. 2024
R1 Distill Llama | 8B, 70B | DeepSeek AI | DeepSeek AI 2025
... | ... | ... | ...
See full list of 20+ LLMs

 

All models

Model | Model size | Author | Reference
CodeGemma | 7B | Google | Google Team, Google Deepmind
Code Llama | 7B, 13B, 34B, 70B | Meta AI | Rozière et al. 2023
Falcon | 7B, 40B, 180B | TII UAE | TII 2023
Falcon 3 | 1B, 3B, 7B, 10B | TII UAE | TII 2024
FreeWilly2 (Stable Beluga 2) | 70B | Stability AI | Stability AI 2023
Function Calling Llama 2 | 7B | Trelis | Trelis et al. 2023
Gemma | 2B, 7B | Google | Google Team, Google Deepmind
Gemma 2 | 9B, 27B | Google | Google Team, Google Deepmind
Gemma 3 | 1B, 4B, 12B, 27B | Google | Google Team, Google Deepmind
Llama 2 | 7B, 13B, 70B | Meta AI | Touvron et al. 2023
Llama 3.1 | 8B, 70B | Meta AI | Meta AI 2024
Llama 3.2 | 1B, 3B | Meta AI | Meta AI 2024
Llama 3.3 | 70B | Meta AI | Meta AI 2024
Mathstral | 7B | Mistral AI | Mistral AI 2024
MicroLlama | 300M | Ken Wang | MicroLlama repo
Mixtral MoE | 8x7B | Mistral AI | Mistral AI 2023
Mistral | 7B, 123B | Mistral AI | Mistral AI 2023
Mixtral MoE | 8x22B | Mistral AI | Mistral AI 2024
OLMo | 1B, 7B | Allen Institute for AI (AI2) | Groeneveld et al. 2024
OpenLLaMA | 3B, 7B, 13B | OpenLM Research | Geng & Liu 2023
Phi 1.5 & 2 | 1.3B, 2.7B | Microsoft Research | Li et al. 2023
Phi 3 | 3.8B | Microsoft Research | Abdin et al. 2024
Phi 4 | 14B | Microsoft Research | Abdin et al. 2024
Phi 4 Mini Instruct | 3.8B | Microsoft Research | Microsoft 2025
Phi 4 Mini Reasoning | 3.8B | Microsoft Research | Xu, Peng et al. 2025
Phi 4 Reasoning | 3.8B | Microsoft Research | Abdin et al. 2025
Phi 4 Reasoning Plus | 3.8B | Microsoft Research | Abdin et al. 2025
Platypus | 7B, 13B, 70B | Lee et al. | Lee, Hunter, and Ruiz 2023
Pythia | {14,31,70,160,410}M, {1,1.4,2.8,6.9,12}B | EleutherAI | Biderman et al. 2023
Qwen2.5 | 0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B | Alibaba Group | Qwen Team 2024
Qwen2.5 Coder | 0.5B, 1.5B, 3B, 7B, 14B, 32B | Alibaba Group | Hui, Binyuan et al. 2024
Qwen2.5 1M (Long Context) | 7B, 14B | Alibaba Group | Qwen Team 2025
Qwen2.5 Math | 1.5B, 7B, 72B | Alibaba Group | An, Yang et al. 2024
QwQ | 32B | Alibaba Group | Qwen Team 2025
QwQ-Preview | 32B | Alibaba Group | Qwen Team 2024
Qwen3 | 0.6B, 1.7B, 4B {Hybrid, Thinking-2507, Instruct-2507}, 8B, 14B, 32B | Alibaba Group | Qwen Team 2025
Qwen3 MoE | 30B {Hybrid, Thinking-2507, Instruct-2507}, 235B {Hybrid, Thinking-2507, Instruct-2507} | Alibaba Group | Qwen Team 2025
R1 Distill Llama | 8B, 70B | DeepSeek AI | DeepSeek AI 2025
SmolLM2 | 135M, 360M, 1.7B | Hugging Face | Hugging Face 2024
Salamandra | 2B, 7B | Barcelona Supercomputing Centre | BSC-LTC 2024
StableCode | 3B | Stability AI | Stability AI 2023
StableLM | 3B, 7B | Stability AI | Stability AI 2023
StableLM Zephyr | 3B | Stability AI | Stability AI 2023
TinyLlama | 1.1B | Zhang et al. | Zhang et al. 2023

Tip: You can list all available models by running the litgpt download list command.
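If you prefer to stay in Python, the same command can be invoked programmatically. This is a minimal sketch using the standard library's subprocess module around the litgpt download list command mentioned above; it assumes the litgpt CLI is installed and on your PATH.

import subprocess

# Run the LitGPT CLI from Python and capture the list of supported models
result = subprocess.run(
    ["litgpt", "download", "list"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)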

 


Workflows

Finetune | Pretrain | Continued pretraining | Evaluate | Deploy | Test

 

Use the command line interface to run advanced workflows such as pretraining or finetuning on your own data.

All workflows

After installing LitGPT, select the model and workflow to run (finetune, pretrain, evaluate, deploy, etc...):

# litgpt [action] [model]
litgpt serve     meta-llama/Llama-3.2-3B-Instruct
litgpt finetune  meta-llama/Llama-3.2-3B-Instruct
litgpt pretrain  meta-llama/Llama-3.2-3B-Instruct
litgpt chat      meta-llama/Llama-3.2-3B-Instruct
litgpt evaluate  meta-llama/Llama-3.2-3B-Instruct

 


Finetune an LLM

Run on Studios

 

Finetuning is the process of taking a pretrained AI model and further training it on a smaller, specialized dataset tailored to a specific task or application.

 

# 0) Setup your dataset
curl -L https://huggingface.co/datasets/ksaw008/finance_alpaca/resolve/main/finance_alpaca.json -o my_custom_dataset.json

# 1) Finetune a model (auto downloads weights)
litgpt finetune microsoft/phi-2 \
  --data JSON \
  --data.json_path my_custom_dataset.json \
  --data.val_split_fraction 0.1 \
  --out_dir out/custom-model

# 2) Test the model
litgpt chat out/custom-model/final

# 3) Deploy the model
litgpt serve out/custom-model/final
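If you want to build my_custom_dataset.json yourself instead of downloading one, the sketch below writes a small file with the standard library. It assumes the JSON data loader expects Alpaca-style records with instruction, input, and output fields (the layout used by the finance_alpaca dataset above); the example records are purely illustrative.

import json

# Illustrative records in Alpaca-style format (instruction/input/output)
records = [
    {
        "instruction": "What does LTV stand for in finance?",
        "input": "",
        "output": "LTV stands for loan-to-value ratio.",
    },
    {
        "instruction": "Explain compound interest in one sentence.",
        "input": "",
        "output": "Compound interest is interest earned on both the principal and previously accumulated interest.",
    },
]

with open("my_custom_dataset.json", "w") as f:
    json.dump(records, f, indent=2)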

Read the full finetuning docs

 


Deploy an LLM

Deploy on Studios

 

Deploy a pretrained or finetuned LLM to use it in real-world applications. Deploying automatically sets up a web server that can be accessed by a website or app.

# deploy an out-of-the-box LLM
litgpt serve microsoft/phi-2

# deploy your own trained model
litgpt serve path/to/microsoft/phi-2/checkpoint

Show code to query server:

 

Test the server in a separate terminal and integrate the model API into your AI product:

# 3) Use the server (in a separate Python session)
import requests, json

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "Fix typos in the following sentence: Example input"},
)
print(response.json()["output"])
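For use inside an application, the same request can be wrapped in a small helper. The sketch below reuses the /predict endpoint and JSON shape from the example above; the function name query_litgpt and the 60-second timeout are illustrative assumptions, not part of the LitGPT API.

import requests

def query_litgpt(prompt: str, url: str = "http://127.0.0.1:8000/predict") -> str:
    # Hypothetical helper around the /predict endpoint shown above;
    # the timeout is an arbitrary choice for illustration.
    response = requests.post(url, json={"prompt": prompt}, timeout=60)
    response.raise_for_status()
    return response.json()["output"]

print(query_litgpt("Fix typos in the following sentence: Example input"))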

Read the full deploy docs.

 


Evaluate an LLM

Evaluate an LLM to test its performance on various tasks and see how well it understands and generates text. Simply put, evaluation measures things like how well the model would do on college-level chemistry, coding, and so on, using benchmarks such as MMLU and TruthfulQA.

litgpt evaluate microsoft/phi-2 --tasks 'truthfulqa_mc2,mmlu'

Read the full evaluation docs.

 


Test an LLM

Run on Studios

 

Test how well the model works via an interactive chat. Use the chat command to chat, extract embeddings, etc...

Here's an example showing how to use the Phi-2 LLM:

litgpt chat microsoft/phi-2

>> Prompt: What do Llamas eat?

Full code:

 

# 1) List all supported LLMs
litgpt download list

# 2) Use a model (auto downloads weights)
litgpt chat microsoft/phi-2

>> Prompt: What do Llamas eat?

The download of certain models requires an additional access token. You can read more about this in the download documentation.

Read the full chat docs.

 


Pretrain an LLM

Run on Studios

 

Pretraining is the process of teaching an AI model by exposing it to a large amount of data before it is fine-tuned for specific tasks.

Show code:

 

mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt

# 1) Download a tokenizer
litgpt download EleutherAI/pythia-160m \
  --tokenizer_only True

# 2) Pretrain the model
litgpt pretrain EleutherAI/pythia-160m \
  --tokenizer_dir EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model

# 3) Test the model
litgpt chat out/custom-model/final
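If you would rather prepare custom_texts/ from Python instead of curl, a minimal sketch of the same data-download step is shown below. It uses the same two Project Gutenberg URLs as the shell commands above; urllib and pathlib are used purely for illustration.

import urllib.request
from pathlib import Path

# Same two Project Gutenberg books as in the shell commands above
books = {
    "book1.txt": "https://www.gutenberg.org/cache/epub/24440/pg24440.txt",
    "book2.txt": "https://www.gutenberg.org/cache/epub/26393/pg26393.txt",
}

out_dir = Path("custom_texts")
out_dir.mkdir(parents=True, exist_ok=True)

for filename, url in books.items():
    urllib.request.urlretrieve(url, out_dir / filename)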

Read the full pretraining docs

 


Continue pretraining an LLM

Run on Studios

 

Continued pretraining is another way of finetuning that specializes an already pretrained model by training on custom data:

Show code:

 

mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt

# 1) Continue pretraining a model (auto downloads weights)
litgpt pretrain EleutherAI/pythia-160m \
  --tokenizer_dir EleutherAI/pythia-160m \
  --initial_checkpoint_dir EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model

# 2) Test the model
litgpt chat out/custom-model/final

Read the full continued pretraining docs

 


State-of-the-art features

✅ State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, optional CPU offloading, and TPU and XLA support.
✅ Pretrain, finetune, and deploy
✅ Reduce compute requirements with low-precision settings: FP16, BF16, and FP16/FP32 mixed.
✅ Lower memory requirements with quantization: 4-bit floats, 8-bit integers, and double quantization.
✅ Configuration files for great out-of-the-box performance.
✅ Parameter-efficient finetuning: LoRA, QLoRA, Adapter, and Adapter v2.
✅ Exporting to other popular model weight formats.
✅ Many popular datasets for pretraining and finetuning, and support for custom datasets.
✅ Readable and easy-to-modify code to experiment with the latest research ideas.

 


Training recipes

LitGPT comes with validated recipes (YAML configs) to train models under different conditions. We generated these recipes from the parameters we found to perform best for each training scenario.

Browse all training recipes here.

Example

litgpt finetune \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml

✅ Use configs to customize training

Configs let you customize all granular training parameters, for example:

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

...

✅ Example: LoRA finetuning config

 

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  #   (type: float, default: 0.0003)
  learning_rate: 0.0002

  #   (type: float, default: 0.02)
  weight_decay: 0.0

  #   (type: float, default: 0.9)
  beta1: 0.9

  #   (type: float, default: 0.95)
  beta2: 0.95

  #   (type: Optional[float], default: null)
  max_norm:

  #   (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337
✅ Override any parameter in the CLI:
litgpt finetune \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml \
  --lora_r 4

 


Project highlights

LitGPT powers many great AI projects, initiatives, challenges and of course enterprises. Please submit a pull request to be considered for a feature.

📊 SAMBA: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

The Samba project by researchers at Microsoft is built on top of the LitGPT code base and combines state space models with sliding window attention, which outperforms pure state space models.

🏆 NeurIPS 2023 Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day

The LitGPT repository was the official starter kit for the NeurIPS 2023 LLM Efficiency Challenge, a competition focused on finetuning an existing non-instruction-tuned LLM for 24 hours on a single GPU.

🦙 TinyLlama: An Open-Source Small Language Model

LitGPT powered the TinyLlama project and the TinyLlama: An Open-Source Small Language Model research paper.

🍪 MicroLlama: MicroLlama-300M

MicroLlama is a 300M Llama model pretrained on 50B tokens powered by TinyLlama and LitGPT.

🔬 Pre-training Small Base LMs with Fewer Tokens

The research paper"Pre-training Small Base LMs with Fewer Tokens", which utilizes LitGPT, develops smaller base language models by inheriting a few transformer blocks from larger models and training on a tiny fraction of the data used by the larger models. It demonstrates that these smaller models can perform comparably to larger models despite using significantly less training data and resources.

 


Community

We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.

 

Tutorials

🚀 Get started
⚡️ Finetuning, incl. LoRA, QLoRA, and Adapters
🤖 Pretraining
💬 Model evaluation
📘 Supported and custom datasets
🧹 Quantization
🤯 Tips for dealing with out-of-memory (OOM) errors
🧑🏽‍💻 Using cloud TPUs

 


Acknowledgments

This implementation extends on Lit-LLaMA and nanoGPT, and it's powered by Lightning Fabric.

License

LitGPT is released under the Apache 2.0 license.

Citation

If you use LitGPT in your research, please cite the following work:

@misc{litgpt-2023,
  author       = {Lightning AI},
  title        = {LitGPT},
  howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
  year         = {2023},
}

 
