
pytorch/ignite


TL;DR

Ignite is a high-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

[Image: PyTorch-Ignite teaser. Click on the image to see the complete code.]

Features

  • Less code than pure PyTorch while ensuring maximum control and simplicity

  • Library approach with no inversion of program control: use Ignite where and when you need it

  • Extensible API for metrics, experiment managers, and other components

Why Ignite?

Ignite is a library that provides three high-level features:

  • Extremely simple engine and event system
  • Out-of-the-box metrics to easily evaluate models
  • Built-in handlers to compose training pipelines, save artifacts, and log parameters and metrics (a checkpointing sketch follows this list)
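
As an illustration of the built-in handlers, here is a minimal checkpointing sketch using ignite.handlers.Checkpoint with DiskSaver. The trainer, model and optimizer objects and the output directory are assumed placeholders, not code from this README:

from ignite.engine import Events
from ignite.handlers import Checkpoint, DiskSaver

# Keep the two most recent checkpoints of the model and optimizer,
# saved at the end of every epoch ("/tmp/checkpoints" is an arbitrary path).
to_save = {"model": model, "optimizer": optimizer, "trainer": trainer}
checkpoint = Checkpoint(to_save, DiskSaver("/tmp/checkpoints", create_dir=True), n_saved=2)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint)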

Simplified training and validation loop

No more coding for/while loops on epochs and iterations. Users instantiate engines and run them.

Example
from ignite.engine import Engine, Events, create_supervised_evaluator
from ignite.metrics import Accuracy


# Setup training engine:
def train_step(engine, batch):
    # Users can do whatever they need on a single iteration
    # Eg. forward/backward pass for any number of models, optimizers, etc
    # ...
    pass

trainer = Engine(train_step)

# Setup single model evaluation engine
evaluator = create_supervised_evaluator(model, metrics={"accuracy": Accuracy()})

def validation():
    state = evaluator.run(validation_data_loader)
    # print computed metrics
    print(trainer.state.epoch, state.metrics)

# Run model's validation at the end of each epoch
trainer.add_event_handler(Events.EPOCH_COMPLETED, validation)

# Start the training
trainer.run(training_data_loader, max_epochs=100)

Power of Events & Handlers

The cool thing with handlers is that they offer unparalleled flexibility (compared to, for example, callbacks). Handlers can be any function: e.g. a lambda, a simple function, a class method, etc. Thus, you are not required to inherit from an interface and override its abstract methods, which would unnecessarily bulk up your code and its complexity.

Execute any number of functions whenever you wish

Examples
trainer.add_event_handler(Events.STARTED, lambda _: print("Start training"))

# attach handler with args, kwargs
mydata = [1, 2, 3, 4]
logger = ...

def on_training_ended(data):
    print(f"Training is ended. mydata={data}")
    # User can use variables from another scope
    logger.info("Training is ended")

trainer.add_event_handler(Events.COMPLETED, on_training_ended, mydata)

# call any number of functions on a single event
trainer.add_event_handler(Events.COMPLETED, lambda engine: print(engine.state.times))

@trainer.on(Events.ITERATION_COMPLETED)
def log_something(engine):
    print(engine.state.output)

Built-in events filtering

Examples
# run the validation every 5 epochs
@trainer.on(Events.EPOCH_COMPLETED(every=5))
def run_validation():
    # run validation
    ...

# change some training variable once on 20th epoch
@trainer.on(Events.EPOCH_STARTED(once=20))
def change_training_variable():
    # ...
    ...

# Trigger handler with a custom user-defined frequency
@trainer.on(Events.ITERATION_COMPLETED(event_filter=first_x_iters))
def log_gradients():
    # ...
    ...
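
For reference, the event_filter argument expects a callable that takes the engine and the current event counter and returns a bool. Below is a minimal sketch of the first_x_iters filter referenced above; the name comes from the snippet, but which iterations it selects is an assumption for illustration:

def first_x_iters(engine, event):
    # `event` is the iteration counter here; return True to fire the handler
    # on iterations 1, 2, 5 and 10 only (illustrative choice)
    return event in (1, 2, 5, 10)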

Stack events to share some actions

Examples

Events can be stacked together to enable multiple calls:

@trainer.on(Events.COMPLETED | Events.EPOCH_COMPLETED(every=10))
def run_validation():
    # ...
    ...

Custom events to go beyond standard events

Examples

Custom events related to backward and optimizer step calls:

from ignite.engine import EventEnum


class BackpropEvents(EventEnum):
    BACKWARD_STARTED = 'backward_started'
    BACKWARD_COMPLETED = 'backward_completed'
    OPTIM_STEP_COMPLETED = 'optim_step_completed'


def update(engine, batch):
    # ...
    loss = criterion(y_pred, y)
    engine.fire_event(BackpropEvents.BACKWARD_STARTED)
    loss.backward()
    engine.fire_event(BackpropEvents.BACKWARD_COMPLETED)
    optimizer.step()
    engine.fire_event(BackpropEvents.OPTIM_STEP_COMPLETED)
    # ...

trainer = Engine(update)
trainer.register_events(*BackpropEvents)

@trainer.on(BackpropEvents.BACKWARD_STARTED)
def function_before_backprop(engine):
    # ...
    ...

Out-of-the-box metrics

Example
from ignite.metrics import Precision, Recall

precision = Precision(average=False)
recall = Recall(average=False)
F1_per_class = (precision * recall * 2 / (precision + recall))
F1_mean = F1_per_class.mean()  # torch mean method
F1_mean.attach(engine, "F1")
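
The metrics API is also extensible (see "Features" above). A minimal sketch of a custom metric follows, assuming the default (y_pred, y) output format of an evaluator engine as in the examples above; the class name and the quantity it computes are illustrative only, not part of Ignite:

import torch
from ignite.metrics import Metric


class MeanAbsoluteDifference(Metric):
    # Custom metrics implement reset/update/compute

    def reset(self):
        # called once before each run
        self._sum = 0.0
        self._num_examples = 0

    def update(self, output):
        # output is (y_pred, y) by default
        y_pred, y = output
        self._sum += torch.abs(y_pred - y).sum().item()
        self._num_examples += y.shape[0]

    def compute(self):
        return self._sum / max(self._num_examples, 1)

# attach it to an engine like any built-in metric
MeanAbsoluteDifference().attach(evaluator, "mad")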

Installation

From pip:

pip install pytorch-ignite

From conda:

conda install ignite -c pytorch

From source:

pip install git+https://github.com/pytorch/ignite

Nightly releases

From pip:

pip install --pre pytorch-ignite

From conda (this will install the pytorch nightly release instead of the stable version as a dependency):

conda install ignite -c pytorch-nightly

Docker Images

Using pre-built images

Pull a pre-built docker image from our Docker Hub and run it with docker v19.03+.

docker run --gpus all -it -v $PWD:/workspace/project --network=host --shm-size 16G pytorchignite/base:latest /bin/bash
List of available pre-built images

Base

  • pytorchignite/base:latest
  • pytorchignite/apex:latest
  • pytorchignite/hvd-base:latest
  • pytorchignite/hvd-apex:latest
  • pytorchignite/msdp-apex:latest

Vision:

  • pytorchignite/vision:latest
  • pytorchignite/hvd-vision:latest
  • pytorchignite/apex-vision:latest
  • pytorchignite/hvd-apex-vision:latest
  • pytorchignite/msdp-apex-vision:latest

NLP:

  • pytorchignite/nlp:latest
  • pytorchignite/hvd-nlp:latest
  • pytorchignite/apex-nlp:latest
  • pytorchignite/hvd-apex-nlp:latest
  • pytorchignite/msdp-apex-nlp:latest

For more details, see here.

Getting Started

A few pointers to get you started:

Documentation

Additional Materials

Examples

Tutorials

Reproducible Training Examples

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

  • ImageNet - logs on Ignite Trains server coming soon ...
  • Pascal VOC2012 - logs on Ignite Trains server coming soon ...

Features:

Code-Generator application

The easiest way to create your training scripts with PyTorch-Ignite:

Communication

User feedback

We have created a form for "user feedback". We appreciate any type of feedback, and this is how we would like to see our community:

  • If you like the project and want to say thanks, this is the right place.
  • If you do not like something, please share it with us, and we can see how to improve it.

Thank you!

Contributing

Please see the contribution guidelines for more information.

As always, PRs are welcome :)

Projects using Ignite

Research papers
Blog articles, tutorials, books
Toolkits
Others

See other projects at "Used by"

If your project implements a paper, represents use cases not covered in our official tutorials, is Kaggle competition code, or simply presents interesting results and uses Ignite, we would like to add it to this list, so please send a PR with a brief description of the project.

Citing Ignite

If you use PyTorch-Ignite in a scientific publication, we would appreciate citations to our project.

@misc{pytorch-ignite,
  author = {V. Fomin and J. Anmol and S. Desroziers and J. Kriss and A. Tejani},
  title = {High-level library to help with training neural networks in PyTorch},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/pytorch/ignite}},
}

About the team & Disclaimer

PyTorch-Ignite is a NumFOCUS Affiliated Project, operated and maintained by volunteers in the PyTorch community in their capacities as individuals (and not as representatives of their employers). See the "About us" page for a list of core contributors. For usage questions and issues, please see the various channels here. For all other questions and inquiries, please send an email to contact@pytorch-ignite.ai.

