gpleiss/efficient_densenet_pytorch

A memory-efficient implementation of DenseNets
A PyTorch >=1.0 implementation of DenseNets, optimized to save GPU memory.

Recent updates

  1. Now works on PyTorch 1.0! It uses the checkpointing feature, which makes this code WAY more efficient!!!

Motivation

While DenseNets are fairly easy to implement in deep learning frameworks, most implementations (such as the original) tend to be memory-hungry. In particular, the number of intermediate feature maps generated by batch normalization and concatenation operations grows quadratically with network depth. It is worth emphasizing that this is not a property inherent to DenseNets, but rather to the implementation.

This implementation uses a new strategy to reduce the memory consumption of DenseNets. We use checkpointing to compute the Batch Norm and concatenation feature maps. These intermediate feature maps are discarded during the forward pass and recomputed for the backward pass. This adds 15-20% time overhead for training, but reduces feature map consumption from quadratic to linear.
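The trade described above can be sketched with PyTorch's built-in `torch.utils.checkpoint` utility. This is a minimal illustration, not the repository's actual code: `bn_concat` here is a stand-in for the Batch Norm + concatenation step, reduced to a plain concatenation.

```python
import torch
from torch.utils.checkpoint import checkpoint

def bn_concat(*features):
    # Stand-in for the DenseNet layer input: concatenate all prior feature
    # maps along the channel dimension. In the real model this is followed
    # by BatchNorm + ReLU + 1x1 convolution.
    return torch.cat(features, dim=1)

x1 = torch.randn(2, 8, 4, 4, requires_grad=True)
x2 = torch.randn(2, 8, 4, 4, requires_grad=True)

# Plain call: the concatenated tensor is kept alive for the backward pass.
out = bn_concat(x1, x2)

# Checkpointed call: the intermediate tensor is freed after the forward
# pass and recomputed on the fly during backward, trading compute for memory.
out_ckpt = checkpoint(bn_concat, x1, x2)
out_ckpt.sum().backward()  # gradients still flow to x1 and x2
```

Because each dense layer's concatenated input no longer has to be stored, the memory for these intermediates no longer accumulates across layers.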

This implementation is inspired by this technical report, which outlines a strategy for efficient DenseNets via memory sharing.

Requirements

  • PyTorch >=1.0.0
  • CUDA

Usage

In your existing project: there is one file in the models folder.

If you care about speed and memory is not a concern, pass the efficient=False argument to the DenseNet constructor. Otherwise, pass in efficient=True.

Options:

  • All options are described in the docstrings of the model files
  • The depth is controlled by the block_config option
  • efficient=True uses the memory-efficient version
  • If you want to use the model for ImageNet, set small_inputs=False. For CIFAR or SVHN, set small_inputs=True.
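Putting the options above together, a usage sketch (assuming the DenseNet class and import path from this repository's models folder; the argument values are illustrative, not recommendations):

```python
from models import DenseNet  # the single file in the models folder

# block_config controls depth: here, three dense blocks of 16 layers each.
model = DenseNet(
    growth_rate=12,             # features added per DenseNet layer
    block_config=(16, 16, 16),
    efficient=True,             # memory-efficient (checkpointed) version
    small_inputs=True,          # CIFAR/SVHN-sized inputs; False for ImageNet
)
```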

Running the demo:

The only extra package you need to install is python-fire:

pip install fire
  • Single GPU:
CUDA_VISIBLE_DEVICES=0 python demo.py --efficient True --data <path_to_folder_with_cifar10> --save <path_to_save_dir>
  • Multiple GPUs:
CUDA_VISIBLE_DEVICES=0,1,2 python demo.py --efficient True --data <path_to_folder_with_cifar10> --save <path_to_save_dir>

Options:

  • --depth (int) - depth of the network (number of convolution layers) (default 40)
  • --growth_rate (int) - number of features added per DenseNet layer (default 12)
  • --n_epochs (int) - number of epochs for training (default 300)
  • --batch_size (int) - size of minibatch (default 256)
  • --seed (int) - manually set the random seed (default None)

Performance

A comparison of the two implementations (each is a DenseNet-BC with 100 layers, batch size 64, tested on an NVIDIA Pascal Titan-X):

| Implementation | Memory consumption (GB/GPU) | Speed (sec/minibatch) |
| --- | --- | --- |
| Naive | 2.863 | 0.165 |
| Efficient | 1.605 | 0.207 |
| Efficient (multi-GPU) | 0.985 | - |

Other efficient implementations

Reference

@article{pleiss2017memory,
  title={Memory-Efficient Implementation of DenseNets},
  author={Pleiss, Geoff and Chen, Danlu and Huang, Gao and Li, Tongcheng and van der Maaten, Laurens and Weinberger, Kilian Q},
  journal={arXiv preprint arXiv:1707.06990},
  year={2017}
}
