

Axolotl

A Free and Open Source LLM Fine-tuning Framework


🎉 Latest Updates

  • 2025/07:
    • ND Parallelism support has been added to Axolotl. Compose Context Parallelism (CP), Tensor Parallelism (TP), and Fully Sharded Data Parallelism (FSDP) within a single node and across multiple nodes. Check out the blog post for more info, and see the config sketch after this list.
    • Axolotl adds more models: GPT-OSS, Gemma 3n, Liquid Foundation Model 2 (LFM2), and Arcee Foundation Models (AFM).
    • FP8 finetuning with the fp8 gather op is now possible in Axolotl via torchao. Get started here!
    • Voxtral, Magistral 1.1, and Devstral with mistral-common tokenizer support have been integrated into Axolotl!
    • TiledMLP support for single-GPU to multi-GPU training with DDP, DeepSpeed, and FSDP has been added to enable Arctic Long Sequence Training (ALST). See examples for using ALST with Axolotl!
  • 2025/05: Quantization Aware Training (QAT) support has been added to Axolotl. Explore the docs to learn more!
  • 2025/03: Axolotl has implemented Sequence Parallelism (SP) support. Read the blog and docs to learn how to scale your context length when fine-tuning.

Older updates:
  • 2025/06: Magistral with mistral-common tokenizer support has been added to Axolotl. See examples to start training your own Magistral models with Axolotl!
  • 2025/04: Llama 4 support has been added to Axolotl. See examples to start training your own Llama 4 models with Axolotl's linearized version!
  • 2025/03: (Beta) Fine-tuning multimodal models is now supported in Axolotl. Check out the docs to fine-tune your own!
  • 2025/02: Axolotl has added LoRA optimizations to reduce memory usage and improve training speed for LoRA and QLoRA in single-GPU and multi-GPU training (DDP and DeepSpeed). Jump into the docs to give it a try.
  • 2025/02: Axolotl has added GRPO support. Dive into our blog and GRPO example and have some fun!
  • 2025/01: Axolotl has added Reward Modelling / Process Reward Modelling fine-tuning support. See docs.
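
As a rough illustration of how the ND parallelism dimensions compose, here is a hypothetical config fragment. The key names (tensor_parallel_size, context_parallel_size, dp_shard_size) and the file name my-config.yml are assumptions for illustration only; consult the blog post and docs linked above for the actual options. The idea is that the product of the parallelism degrees matches the number of GPUs (here 2 x 2 x 2 = 8):

# Hypothetical ND-parallelism fragment appended to a training config.
# Key names below are assumptions, not confirmed Axolotl options; see the docs for the real ones.
cat >> my-config.yml <<'EOF'
tensor_parallel_size: 2    # TP degree within a node
context_parallel_size: 2   # CP degree for long sequences
dp_shard_size: 2           # FSDP sharding degree; 2 x 2 x 2 = 8 GPUs total
EOF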

✨ Overview

Axolotl is a free and open-source tool designed to streamline post-training and fine-tuning for the latest large language models (LLMs).

Features:

🚀 Quick Start - LLM Fine-tuning in Minutes

Requirements:

  • NVIDIA GPU (Ampere or newer for bf16 and Flash Attention) or AMD GPU
  • Python 3.11
  • PyTorch ≥2.7.1
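
To quickly check these requirements on your machine, a shell one-liner along these lines (a sketch, not from the Axolotl docs) prints the Python and PyTorch versions and the GPU compute capability (Ampere and newer report 8.0 or higher):

# Informal environment check: Python version, PyTorch version, and GPU generation
python3 --version
python3 -c "import torch; print(torch.__version__); print(torch.cuda.get_device_capability() if torch.cuda.is_available() else 'no CUDA GPU visible')"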

Google Colab

Open In Colab

Installation

Using pip

pip3 install -U packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]

# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs  # OPTIONAL
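
After installing, a quick smoke test (not from the docs, just a reasonable sanity check) confirms the package imports and the CLI is on your PATH:

# Informal post-install check; any import or CLI error means the install is incomplete
python3 -c "import axolotl; print('axolotl imported OK')"
axolotl --help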

Using Docker

Installing with Docker can be less error-prone than installing in your own environment.

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
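
If you want configs, datasets, and checkpoints to survive the container, a common pattern (a sketch; the mount path /workspace/mounted is an illustrative choice, not something the image requires) is to bind-mount your working directory:

# Same image, with the current directory mounted so outputs persist on the host
docker run --gpus '"all"' --rm -it \
  -v "$(pwd)":/workspace/mounted \
  axolotlai/axolotl:main-latest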

Other installation approaches are described here.

Cloud Providers

Your First Fine-tune

# Fetch axolotl examples
axolotl fetch examples

# Or, specify a custom path
axolotl fetch examples --dest path/to/folder

# Train a model using LoRA
axolotl train examples/llama-3/lora-1b.yml

That's it! Check out our Getting Started Guide for a more detailed walkthrough.
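
If you are curious what a config like examples/llama-3/lora-1b.yml roughly looks like, here is a minimal sketch built from commonly used Axolotl options. The base model, dataset, and hyperparameter values are placeholders, and the shipped example may use different keys and values; treat this as orientation, not a copy of the example:

# A minimal, hypothetical LoRA config; values are placeholders, not the shipped example.
cat > my-lora.yml <<'EOF'
base_model: meta-llama/Llama-3.2-1B   # placeholder; any causal LM on the Hugging Face Hub
datasets:
  - path: tatsu-lab/alpaca            # placeholder instruction dataset
    type: alpaca
adapter: lora                         # train LoRA adapters instead of full weights
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true              # apply LoRA to all linear layers
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
optimizer: adamw_torch
bf16: true                            # requires Ampere or newer
flash_attention: true
output_dir: ./outputs/lora-1b
EOF

# Train with the sketched config
axolotl train my-lora.yml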

📚 Documentation

🤝 Getting Help

🌟 Contributing

Contributions are welcome! Please see our Contributing Guide for details.

❤️ Sponsors

Interested in sponsoring? Contact us at wing@axolotl.ai

📝 Citing Axolotl

If you use Axolotl in your research or projects, please cite it as follows:

@software{axolotl,
  title = {Axolotl: Open Source LLM Post-Training},
  author = {{Axolotl maintainers and contributors}},
  url = {https://github.com/axolotl-ai-cloud/axolotl},
  license = {Apache-2.0},
  year = {2023}
}

📜 License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
