Tutorials
Complete, end-to-end, hands-on PyTorch-Ignite tutorials with interactive Google Colab notebooks.
Beginner
Welcome to PyTorch-Ignite's quick start guide, which covers the essentials of getting a project up and running while walking through basic concepts of Ignite. In just a few lines of code, you can get your model trained and validated. The complete code can be found at the end of this guide.
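To give a taste of those few lines, here is a minimal, self-contained sketch of the trainer/evaluator loop the guide builds up to; the toy model and random tensors are stand-ins for a real task:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss

# Toy model and random data as stand-ins for a real task
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
data = TensorDataset(torch.randn(256, 784), torch.randint(0, 10, (256,)))
train_loader = val_loader = DataLoader(data, batch_size=32)

trainer = create_supervised_trainer(model, optimizer, criterion)
evaluator = create_supervised_evaluator(
    model, metrics={"accuracy": Accuracy(), "loss": Loss(criterion)}
)

@trainer.on(Events.EPOCH_COMPLETED)
def log_validation(engine):
    # Run validation at the end of every epoch and print the metrics
    evaluator.run(val_loader)
    print(f"Epoch {engine.state.epoch}: {evaluator.state.metrics}")

trainer.run(train_loader, max_epochs=5)
```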
In this tutorial we will fine-tune a model from the Transformers library for text classification using PyTorch-Ignite. We will follow the Fine-tuning a pretrained model tutorial for preprocessing the text and defining the model, optimizer, and dataloaders.
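As a rough sketch of how such a fine-tuning step plugs into an Ignite Engine (the model name "bert-base-cased" and the commented-out train_loader are placeholder assumptions; the tutorial builds them following the HuggingFace guide):

```python
import torch
from ignite.engine import Engine
from transformers import AutoModelForSequenceClassification

# Placeholder checkpoint; the tutorial constructs model and dataloaders
# following the HuggingFace fine-tuning guide.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def train_step(engine, batch):
    # batch is a dict with input_ids, attention_mask and labels
    model.train()
    optimizer.zero_grad()
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)
# trainer.run(train_loader, max_epochs=3)  # train_loader: tokenized dataset loader
```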
Intermediate
This tutorial is a brief introduction to distributed training with Ignite on one or more CPUs, GPUs, or TPUs. We will also introduce several helper functions and Ignite concepts (setting up common training handlers, saving to and loading from checkpoints, etc.) that you can easily incorporate into your code.
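In outline, the ignite.distributed (idist) helpers that tutorial introduces can be used roughly like this; the toy model and random dataset here are stand-ins:

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset
import ignite.distributed as idist

def training(local_rank, config):
    # idist.auto_* adapts the model, optimizer and dataloader to whichever
    # backend (native DDP, Horovod, XLA) the processes were launched with.
    model = idist.auto_model(nn.Linear(10, 2))  # toy model as a stand-in
    optimizer = idist.auto_optim(torch.optim.SGD(model.parameters(), lr=0.1))
    dataset = TensorDataset(torch.randn(128, 10), torch.randint(0, 2, (128,)))
    loader = idist.auto_dataloader(dataset, batch_size=16)
    print(idist.get_rank(), idist.device())
    # ... build an Ignite trainer here and run it as in a single-process script

# One process per device; the backend could also be "nccl", "xla-tpu" or "horovod"
with idist.Parallel(backend="gloo", nproc_per_node=2) as parallel:
    parallel.run(training, config={})
```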
This tutorial is a brief introduction to training a machine translation model (or any other seq2seq model) using PyTorch-Ignite. This notebook uses models, datasets, and tokenizers from HuggingFace, so they can easily be replaced by other models from the 🤗 Hub.
In this tutorial we will implement a policy-gradient-based algorithm called REINFORCE and use it to solve OpenAI's CartPole problem using PyTorch-Ignite.
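The core of REINFORCE is compact; here is a minimal, hedged sketch of a single policy update (reinforce_update is a hypothetical helper; the tutorial wires this logic into an Ignite Engine and the CartPole environment):

```python
import torch

# A minimal REINFORCE update, assuming log_probs (per-step action
# log-probabilities that carry gradients) and rewards were collected
# from one episode.
def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    returns, G = [], 0.0
    for r in reversed(rewards):                 # discounted return G_t per step
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()               # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```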
Advanced
In this tutorial, we will see how to use advanced distributed functions like all_reduce(), all_gather(), broadcast() and barrier(). We will discuss unique use cases for all of them and represent them visually.
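As a rough preview, the ignite.distributed wrappers for these primitives can be exercised like this (demo is a hypothetical function name; two CPU processes are spawned with the gloo backend so the snippet runs without GPUs):

```python
import torch
import ignite.distributed as idist

def demo(local_rank, config):
    t = torch.tensor([float(idist.get_rank()) + 1.0])
    summed = idist.all_reduce(t.clone())          # element-wise sum over all processes
    gathered = idist.all_gather(t.clone())        # every process's t, concatenated
    from_rank0 = idist.broadcast(t.clone(), src=0)  # rank 0's t, sent to everyone
    idist.barrier()                               # block until all processes arrive here
    print(idist.get_rank(), summed, gathered, from_rank0)

with idist.Parallel(backend="gloo", nproc_per_node=2) as parallel:
    parallel.run(demo, config={})
```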
Other Tutorials
- Text Classification using Convolutional Neural Networks
- Variational AutoEncoders
- Convolutional Neural Networks for Classifying Fashion-MNIST Dataset
- Training Cycle-GAN on Horses to Zebras with Nvidia/Apex - logs on W&B
- Another training Cycle-GAN on Horses to Zebras with Native Torch CUDA AMP - logs on W&B
- Finetuning EfficientNet-B0 on CIFAR100
- Hyperparameter tuning with Ax
- Benchmark mixed precision training on CIFAR100: torch.cuda.amp vs nvidia/apex
Reproducible Training Examples
Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:
Features:
- Distributed training: native or horovod, using PyTorch native AMP (a minimal AMP training-step sketch follows below)
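For reference, a hedged sketch of what a native-AMP training step can look like inside a custom Ignite Engine; the toy model, optimizer and criterion are stand-ins, and AMP only activates when CUDA is available:

```python
import torch
from torch import nn
from ignite.engine import Engine

# Toy model/optimizer/criterion as stand-ins for the baselines' setup
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

def train_step(engine, batch):
    model.train()
    x, y = batch[0].to(device), batch[1].to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = criterion(model(x), y)   # forward pass runs in mixed precision
    scaler.scale(loss).backward()       # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)              # unscale gradients, then optimizer.step()
    scaler.update()
    return loss.item()

trainer = Engine(train_step)
```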