ruihangdu/PyTorch-Deep-Compression


A PyTorch implementation of the iterative pruning method described in Han et al. (2015). The original paper: Learning both Weights and Connections for Efficient Neural Networks.

Usage

The libs package contains the needed utilities, and compressor.py defines a Compressor class that allows pruning a network layer by layer.
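For intuition, here is a minimal sketch of the kind of layer-by-layer magnitude pruning such a class performs. It assumes the threshold rule of Han et al. (2015), pruning weights whose magnitude falls below alpha times the layer's weight standard deviation; the function names are illustrative, not the repo's actual API.

```python
import torch
import torch.nn as nn

def prune_layer(weight: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # Zero out weights below alpha * std(weight), the magnitude-threshold
    # rule of Han et al. (2015). A larger alpha prunes more aggressively.
    threshold = alpha * weight.std()
    return weight * (weight.abs() >= threshold)

def prune_model(model: nn.Module, alpha: float = 1.0) -> None:
    # Visit the model layer by layer and prune each weight tensor in place.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name.endswith("weight"):
                param.copy_(prune_layer(param, alpha))
```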

The file iterative_pruning.py contains the function iter_prune, which implements the iterative pruning procedure.

An example use of the function is given in the main function of the same file. To write your own script, do

```python
from iterative_pruning import *
```

to import all necessary modules, then run your script as follows:

```
python your_script.py [-h] [--data DIR] [--arch ARCH] [-j N] [-b N]
                      [-o O] [-m E] [-c I] [--lr LR] [--momentum M]
                      [--weight_decay W] [--resume PATH] [--pretrained]
                      [-t T [T ...]] [--cuda]
```

```
optional arguments:
  -h, --help            show this help message and exit
  --data DIR, -d DIR    path to dataset
  --arch ARCH, -a ARCH  model architecture: alexnet | densenet121 |
                        densenet161 | densenet169 | densenet201 |
                        inception_v3 | resnet101 | resnet152 | resnet18 |
                        resnet34 | resnet50 | squeezenet1_0 | squeezenet1_1 |
                        vgg11 | vgg11_bn | vgg13 | vgg13_bn | vgg16 |
                        vgg16_bn | vgg19 | vgg19_bn
  -j N, --workers N     number of data loading workers (default: 4)
  -b N, --batch-size N  mini-batch size (default: 256)
  -o O, --optimizer O   optimizers: ASGD | Adadelta | Adagrad | Adam |
                        Adamax | LBFGS | Optimizer | RMSprop | Rprop | SGD |
                        SparseAdam (default: SGD)
  -m E, --max_epochs E  max number of epochs while training
  -c I, --interval I    checkpointing interval
  --lr LR, --learning-rate LR
                        initial learning rate
  --momentum M          momentum
  --weight_decay W, --wd W
                        weight decay
  --resume PATH         path to latest checkpoint (default: none)
  --pretrained          use pre-trained model
  -t T [T ...], --topk T [T ...]
                        Top-k precision metrics
  --cuda
```

(Other architectures from the torchvision package can also be chosen, but have not been experimented with.) The --data argument should point to the location of the ImageNet dataset on your machine.
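For example, a typical invocation for pruning a pretrained AlexNet might look like the following (the dataset path is a placeholder):

```
python your_script.py --data /path/to/imagenet --arch alexnet --pretrained \
    -b 256 -o SGD --lr 0.001 --momentum 0.9 -t 1 5 --cuda
```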

Results

| Model | Top-1 | Top-5 | Compression Rate |
|---------------|-------|------|------------------|
| LeNet-300-100 | 92% | N/A | 92% |
| LeNet-5 | 98.8% | N/A | 92% |
| AlexNet | 39% | 63% | 85.99% |

Note: To achieve better results, try tweaking the alpha hyper-parameter in the function prune() to change the pruning rate of each layer.
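To get a feel for how alpha maps to a pruning rate, the standalone snippet below (assuming, as above, a threshold of alpha times the layer's weight standard deviation) measures the fraction of weights that would be zeroed in a roughly Gaussian weight tensor:

```python
import torch

w = torch.randn(100_000)  # stand-in for one layer's weights
for alpha in (0.5, 1.0, 1.5):
    pruned = (w.abs() < alpha * w.std()).float().mean().item()
    print(f"alpha={alpha}: {pruned:.0%} of weights pruned")
# Roughly 38%, 68%, and 87% for a Gaussian-distributed layer.
```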

Any comments, thoughts, and improvements are appreciated.
