pytorch/torchtitan

A PyTorch native platform for training generative AI models


torchtitan is under extensive development. To use the latest features of torchtitan, we recommend using the most recent PyTorch nightly.

Latest News

  • [2025/11] AMD released an optimized fork of torchtitan for AMD GPUs.
  • [2025/10] We released torchtitan v0.2.0.
  • [2025/10] SkyPilot now supports torchtitan! See the tutorial here.
  • [2025/07] We published instructions on how to add a model to torchtitan.
  • [2025/04] Our paper was accepted by ICLR 2025.
  • [2024/12] GPU MODE lecture on torchtitan.
  • [2024/07] Presentation at PyTorch Conference 2024.

Overview

torchtitan is a PyTorch native platform designed for rapid experimentation and large-scale training of generative AI models. As a minimal clean-room implementation of PyTorch native scaling techniques, torchtitan provides a flexible foundation for developers to build upon. With torchtitan's extension points, one can easily create custom extensions tailored to specific needs.

Our mission is to accelerate innovation in the field of generative AI by empowering researchers and developers to explore new modeling architectures and infrastructure techniques.

The Guiding Principles when building torchtitan

  • Designed to be easy to understand, use and extend for different training purposes.
  • Minimal changes to the model code when applying multi-dimensional parallelism.
  • Bias towards a clean, minimal codebase while providing basic reusable / swappable components.

torchtitan has been showcasing PyTorch's latest distributed training features via support for pretraining Llama 3.1 LLMs of various sizes.

Contributing

We look forward to your contributions!

  • To accelerate contributions to and innovations around torchtitan, we host an experiments folder. New ideas should start there. To contribute, follow the experiments guidelines.
  • For fixes and contributions to core, follow these guidelines.

Llama 3.1 training

Key features available

  1. Multi-dimensional composable parallelisms
  2. Meta device initialization
  3. Selective (layer or operator) and full activation checkpointing
  4. Distributed checkpointing (including async checkpointing)
  5. torch.compile support
  6. Float8 support (how-to)
  7. MXFP8 training for dense and MoE models on Blackwell GPUs.
  8. DDP and HSDP
  9. TorchFT integration
  10. Checkpointable data-loading, with the C4 dataset pre-configured (144M entries) and support for custom datasets
  11. Gradient accumulation, enabled by passing an additional --training.global_batch_size argument in the configuration (see the example after this list)
  12. Flexible learning rate scheduler (warmup-stable-decay)
  13. Loss, GPU memory, throughput (tokens/sec), TFLOPs, and MFU displayed and logged via TensorBoard or Weights & Biases
  14. Debugging tools including CPU/GPU profiling, memory profiling, Flight Recorder, etc.
  15. All options easily configured via toml files
  16. Helper scripts to
    • download tokenizers from Hugging Face
    • convert original Llama 3 checkpoints into the expected DCP format
    • estimate FSDP/HSDP memory usage without materializing the model
    • run distributed inference with Tensor Parallel

We report performance on up to 512 GPUs, and verify loss convergence correctness for various techniques.

Dive into the code

You may want to see how the model is defined or how parallelism techniques are applied. For a guided tour, start with the model definition and parallelism application files in the repository.

Installation

One can directly run the source code, or install torchtitan from a nightly build or a stable release.

From source

This method requires the nightly build of PyTorch, or the latest PyTorch built from source.

git clone https://github.com/pytorch/torchtitan
cd torchtitan
pip install -r requirements.txt
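
As a quick sanity check that the PyTorch requirement is met, you can print the installed version (the exact nightly tag will vary by date and CUDA build):

# Confirm a recent PyTorch (nightly or source build) is importable
python -c "import torch; print(torch.__version__)"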

Nightly builds

This method requires the nightly build of PyTorch. You can replace cu126 with another CUDA version (e.g. cu128) or an AMD ROCm version (e.g. rocm6.3).

pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu126 --force-reinstall
pip install --pre torchtitan --index-url https://download.pytorch.org/whl/nightly/cu126

Stable releases

One can install the latest stable release of torchtitan via pip or conda.

pip install torchtitan
conda install conda-forge::torchtitan

Note that each stable release pins the nightly versions of torch and torchao. Please see release.md for more details.

Downloading a tokenizer

torchtitan currently supports training Llama 3.1 (8B, 70B, 405B) out of the box. To get started training these models, we need to download the tokenizer. Follow the instructions on the official meta-llama repository to ensure you have access to the Llama model weights.

Once you have confirmed access, you can run the following command to download the Llama 3.1 tokenizer to your local machine.

# Get your HF token from https://huggingface.co/settings/tokens
# Llama 3.1 tokenizer
python scripts/download_hf_assets.py --repo_id meta-llama/Llama-3.1-8B --assets tokenizer --hf_token=...

Start a training run

Llama 3 8B model locally on 8 GPUs

CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" ./run_train.sh
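
For a short smoke test before a long run, the sketch below assumes that run_train.sh reads an NGPU environment variable for the local GPU count and forwards --training.* flags as overrides of the toml options; the flag names and values are illustrative and may differ in your torchtitan version.

# Illustrative smoke test: 2 GPUs, 10 training steps (assumed override flags)
NGPU=2 CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" \
  ./run_train.sh --training.steps 10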

Multi-Node Training

For training on ParallelCluster/Slurm type configurations, you can use the multinode_trainer.slurm file to submit your sbatch job.

To get started, adjust the number of nodes and GPUs:

#SBATCH --ntasks=2
#SBATCH --nodes=2

Then start a run where nnodes is your total node count, matching the sbatch node count above.

srun torchrun --nnodes 2

If your GPU count per node is not 8, adjust --nproc_per_node in the torchrun command and #SBATCH --gpus-per-task in the SBATCH command section.
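
Putting these pieces together, a fuller srun invocation for 2 nodes with 8 GPUs each might look like the sketch below. The rendezvous backend, endpoint, port, and training entry point are assumptions; take the actual values from multinode_trainer.slurm and your cluster setup.

# Illustrative only: head_node_ip and the port are placeholders from your Slurm
# environment, and the entry point may differ between torchtitan versions.
srun torchrun --nnodes 2 --nproc_per_node 8 \
  --rdzv_backend c10d --rdzv_endpoint "${head_node_ip}:29500" \
  -m torchtitan.train --job.config_file ./torchtitan/models/llama3/train_configs/llama3_8b.toml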

Citation

We provide a detailed look into the parallelisms and optimizations available in torchtitan, along with summary advice on when to use various techniques.

TorchTitan: One-stop PyTorch native solution for production ready LLM pre-training

@inproceedings{
  liang2025torchtitan,
  title={TorchTitan: One-stop PyTorch native solution for production ready {LLM} pretraining},
  author={Wanchao Liang and Tianyu Liu and Less Wright and Will Constable and Andrew Gu and Chien-Chin Huang and Iris Zhang and Wei Feng and Howard Huang and Junjie Wang and Sanket Purandare and Gokul Nadathur and Stratos Idreos},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=SFN6Wm7YBI}
}

License

Source code is made available under a BSD 3 license; however, you may have other legal obligations that govern your use of other content linked in this repository, such as the license or terms of service for third-party data and models.
