[NeurIPS 2021] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification


This repository contains PyTorch implementation for DynamicViT (NeurIPS 2021).

DynamicViT is a dynamic token sparsification framework that progressively and dynamically prunes redundant tokens in vision transformers based on the input. Our method reduces FLOPs by over 30% and improves throughput by over 40%, while the drop in accuracy is within 0.5% for various vision transformers.
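To give a feel for the core idea, below is a minimal PyTorch sketch of one token-pruning step: a small scoring head rates each token and only the highest-scoring fraction is kept for the following blocks. This is an illustrative, inference-time simplification, not the repository's implementation (which uses a trainable prediction module with Gumbel-Softmax during training); all names here are made up for the example.

```python
import torch
import torch.nn as nn

class TokenPruner(nn.Module):
    """Illustrative token pruning: score tokens, keep the top `keep_ratio` fraction."""
    def __init__(self, dim, keep_ratio=0.7):
        super().__init__()
        self.score = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))
        self.keep_ratio = keep_ratio

    def forward(self, x):
        # x: (B, 1 + N, C) with a class token at index 0 that is never pruned
        cls_tok, patches = x[:, :1], x[:, 1:]
        scores = self.score(patches).squeeze(-1)           # (B, N) keep scores
        num_keep = max(1, int(patches.shape[1] * self.keep_ratio))
        idx = scores.topk(num_keep, dim=1).indices          # indices of kept tokens
        idx = idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
        kept = patches.gather(1, idx)                       # (B, num_keep, C)
        return torch.cat([cls_tok, kept], dim=1)

x = torch.randn(2, 197, 384)              # e.g. DeiT-S: 196 patch tokens + 1 class token
print(TokenPruner(384)(x).shape)          # torch.Size([2, 138, 384]) with keep_ratio 0.7
```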

[Project Page] [arXiv (NeurIPS 2021)]

🔥Updates

We extend our method to more network architectures (i.e., ConvNeXt and Swin Transformers) and more tasks (i.e., object detection and semantic segmentation) with an improved dynamic spatial sparsification framework. Please refer to the extended version of our paper for details. The extended version has been accepted by T-PAMI.

[arXiv (T-PAMI, Journal Version)]

Image Examples



Video Examples


Model Zoo

We provide our DynamicViT models pretrained on ImageNet:

| name | model | rho | acc@1 | acc@5 | FLOPs | url |
|------|-------|-----|-------|-------|-------|-----|
| DynamicViT-DeiT-256/0.7 | deit-256 | 0.7 | 76.53 | 93.12 | 1.3G | Google Drive / Tsinghua Cloud |
| DynamicViT-DeiT-S/0.7 | deit-s | 0.7 | 79.32 | 94.68 | 2.9G | Google Drive / Tsinghua Cloud |
| DynamicViT-DeiT-B/0.7 | deit-b | 0.7 | 81.43 | 95.46 | 11.4G | Google Drive / Tsinghua Cloud |
| DynamicViT-LVViT-S/0.5 | lvvit-s | 0.5 | 81.97 | 95.76 | 3.7G | Google Drive / Tsinghua Cloud |
| DynamicViT-LVViT-S/0.7 | lvvit-s | 0.7 | 83.08 | 96.25 | 4.6G | Google Drive / Tsinghua Cloud |
| DynamicViT-LVViT-M/0.7 | lvvit-m | 0.7 | 83.82 | 96.58 | 8.5G | Google Drive / Tsinghua Cloud |

🔥Updates: We provide our DynamicCNN and DynamicSwin models pretrained on ImageNet:

| name | model | rho | acc@1 | acc@5 | FLOPs | url |
|------|-------|-----|-------|-------|-------|-----|
| DynamicCNN-T/0.7 | convnext-t | 0.7 | 81.59 | 95.72 | 3.6G | Google Drive / Tsinghua Cloud |
| DynamicCNN-T/0.9 | convnext-t | 0.9 | 82.06 | 95.89 | 3.9G | Google Drive / Tsinghua Cloud |
| DynamicCNN-S/0.7 | convnext-s | 0.7 | 82.57 | 96.29 | 5.8G | Google Drive / Tsinghua Cloud |
| DynamicCNN-S/0.9 | convnext-s | 0.9 | 83.12 | 96.42 | 6.8G | Google Drive / Tsinghua Cloud |
| DynamicCNN-B/0.7 | convnext-b | 0.7 | 83.45 | 96.56 | 10.2G | Google Drive / Tsinghua Cloud |
| DynamicCNN-B/0.9 | convnext-b | 0.9 | 83.96 | 96.76 | 11.9G | Google Drive / Tsinghua Cloud |
| DynamicSwin-T/0.7 | swin-t | 0.7 | 80.91 | 95.42 | 4.0G | Google Drive / Tsinghua Cloud |
| DynamicSwin-S/0.7 | swin-s | 0.7 | 83.21 | 96.33 | 6.9G | Google Drive / Tsinghua Cloud |
| DynamicSwin-B/0.7 | swin-b | 0.7 | 83.43 | 96.45 | 12.1G | Google Drive / Tsinghua Cloud |
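The FLOPs columns above are reported per 224x224 image. If you want to measure FLOPs for your own model, fvcore (listed under Requirements below) provides a counter; the snippet here is a generic sketch using a torchvision model as a stand-in, not the exact script used to produce the numbers in the tables.

```python
import torch
from fvcore.nn import FlopCountAnalysis
from torchvision.models import resnet18  # stand-in model; substitute your own

model = resnet18().eval()
dummy = torch.randn(1, 3, 224, 224)      # single 224x224 image, as in the tables

with torch.no_grad():
    flops = FlopCountAnalysis(model, dummy)
    print(f"{flops.total() / 1e9:.2f} GFLOPs")
```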

Usage

Requirements

  • torch>=1.8.0
  • torchvision>=0.9.0
  • timm==0.3.2
  • tensorboardX
  • six
  • fvcore

Data preparation: download and extract ImageNet images from http://image-net.org/. The directory structure should be

ILSVRC2012/
├── train/
│   ├── n01440764/
│   │   ├── n01440764_10026.JPEG
│   │   ├── n01440764_10027.JPEG
│   │   ├── ......
│   ├── ......
├── val/
│   ├── n01440764/
│   │   ├── ILSVRC2012_val_00000293.JPEG
│   │   ├── ILSVRC2012_val_00002138.JPEG
│   │   ├── ......
│   ├── ......
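This is the standard class-per-subfolder layout, so it can be read directly with torchvision's ImageFolder. The snippet below is just a quick sanity check of the extracted data (the path is a placeholder).

```python
from torchvision import datasets, transforms

# Placeholder path; point this at your extracted ILSVRC2012 directory
data_root = "/path/to/ILSVRC2012"

val_set = datasets.ImageFolder(
    root=f"{data_root}/val",
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]),
)
print(len(val_set), "validation images in", len(val_set.classes), "classes")
```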

Model preparation: download pre-trained models if necessary:

| model | url | model | url |
|-------|-----|-------|-----|
| DeiT-Small | link | LVViT-S | link |
| DeiT-Base | link | LVViT-M | link |
| ConvNeXt-T | link | Swin-T | link |
| ConvNeXt-S | link | Swin-S | link |
| ConvNeXt-B | link | Swin-B | link |

Demo

You can try DynamicViT on Colab. Thanks to @dirtycomputer for the contribution.

We also provide a Jupyter notebook where you can run the visualization of DynamicViT.

To run the demo, you need to install matplotlib.
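For a rough idea of what the notebook visualizes, the sketch below overlays a per-patch keep mask on an image with matplotlib. The mask and image here are random placeholders, and the 14x14 patch grid (224/16) is an assumption for DeiT-style models, purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

patch_grid = 14                       # assumed 224/16 patches per side (DeiT-style)
img = np.random.rand(224, 224, 3)     # placeholder image; load a real one in practice
keep = np.random.rand(patch_grid, patch_grid) > 0.3   # placeholder keep decisions

# Upsample the per-patch mask to pixel resolution and dim the pruned regions
mask = np.kron(keep, np.ones((16, 16)))[..., None]
plt.imshow(img * (0.25 + 0.75 * mask))
plt.axis("off")
plt.show()
```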


Evaluation

To evaluate a pre-trained DynamicViT model on the ImageNet validation set with a single GPU, run:

python infer.py --data_path /path/to/ILSVRC2012/ --model model_name \
    --model_path /path/to/model --base_rate 0.7
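The acc@1/acc@5 numbers in the Model Zoo are standard top-1/top-5 accuracy on the ImageNet validation set. For reference, a minimal top-k accuracy computation looks like the sketch below (logits and labels are dummies, not produced by the evaluation script).

```python
import torch

def topk_accuracy(logits, labels, ks=(1, 5)):
    """Fraction of samples whose true label is among the top-k predictions."""
    maxk = max(ks)
    pred = logits.topk(maxk, dim=1).indices            # (B, maxk) predicted classes
    correct = pred.eq(labels.unsqueeze(1))             # (B, maxk) boolean matches
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}

logits = torch.randn(8, 1000)          # dummy predictions over 1000 ImageNet classes
labels = torch.randint(0, 1000, (8,))  # dummy ground-truth labels
print(topk_accuracy(logits, labels))
```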

Training

To train Dynamic Spatial Sparsification models on ImageNet, run:

(You can train models with different keeping ratios by adjusting base_rate; see the sketch below for how the ratio compounds across pruning stages.)
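In the DynamicViT paper, tokens are pruned hierarchically at three stages, so the fraction of surviving tokens compounds (rho, rho^2, rho^3 of the original tokens). The snippet below just works out how many tokens survive each stage for a given base_rate, assuming a DeiT-style 14x14 patch grid (196 tokens); it is an illustration, not part of the training code.

```python
# Tokens kept after each of the three pruning stages for a given base_rate,
# assuming 196 patch tokens (224x224 input with 16x16 patches).
num_tokens = 196
base_rate = 0.7

for stage in range(1, 4):
    ratio = base_rate ** stage
    print(f"after stage {stage}: keep ratio {ratio:.3f} -> {int(num_tokens * ratio)} tokens")
```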

DeiT-S

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamicvit_deit-s --model deit-s --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 30 --base_rate 0.7 --lr 1e-3 --warmup_epochs 5

DeiT-B

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamicvit_deit-b --model deit-b --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 30 --base_rate 0.7 --lr 1e-3 --warmup_epochs 5 --drop_path 0.2 --ratio_weight 5.0

LV-ViT-S

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamicvit_lvvit-s --model lvvit-s --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 30 --base_rate 0.7 --lr 1e-3 --warmup_epochs 5

LV-ViT-M

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamicvit_lvvit-m --model lvvit-m --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 30 --base_rate 0.7 --lr 1e-3 --warmup_epochs 5

DynamicViT can also achieve comparable performance with only 15 epochs of training (around 0.1% lower accuracy than with 30 epochs).

ConvNeXt-T

Train on 8 GPUs:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamic_conv-t --model convnext-t --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 4 --lr_scale 0.2

Train on 4 8-GPU nodes:

python run_with_submitit.py --nodes 4 --ngpus 8 --output_dir logs/dynamic_conv-t --model convnext-t --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 1 --lr_scale 0.2

ConvNeXt-S

Train on 8 GPUs:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamic_conv-s --model convnext-s --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 4 --lr_scale 0.2

Train on 4 8-GPU nodes:

python run_with_submitit.py --nodes 4 --ngpus 8 --output_dir logs/dynamic_conv-s --model convnext-s --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 1 --lr_scale 0.2

ConvNeXt-B

Train on 8 GPUs:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamic_conv-b --model convnext-b --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.5 --update_freq 4 --lr_scale 0.2

Train on 4 8-GPU nodes:

python run_with_submitit.py --nodes 4 --ngpus 8 --output_dir logs/dynamic_conv-b --model convnext-b --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.5 --update_freq 1 --lr_scale 0.2

Swin-T

Train on 8 GPUs:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamic_swin-t --model swin-t --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 4 --lr_scale 0.2

Train on 4 8-GPU nodes:

python run_with_submitit.py --nodes 4 --ngpus 8 --output_dir logs/dynamic_swin-t --model swin-t --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 1 --lr_scale 0.2

Swin-S

Train on 8 GPUs:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamic_swin-s --model swin-s --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 4 --lr_scale 0.2

Train on 4 8-GPU nodes:

python run_with_submitit.py --nodes 4 --ngpus 8 --output_dir logs/dynamic_swin-s --model swin-s --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.2 --update_freq 1 --lr_scale 0.2

Swin-B

Train on 8 GPUs:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --output_dir logs/dynamic_swin-b --model swin-b --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.5 --update_freq 4 --lr_scale 0.2

Train on 4 8-GPU nodes:

python run_with_submitit.py --nodes 4 --ngpus 8 --output_dir logs/dynamic_swin-b --model swin-b --input_size 224 --batch_size 128 --data_path /path/to/ILSVRC2012/ --epochs 120 --base_rate 0.7 --lr 4e-3 --drop_path 0.5 --update_freq 1 --lr_scale 0.2

License

MIT License

Acknowledgements

Our code is based on pytorch-image-models, DeiT, LV-ViT, ConvNeXt and Swin-Transformer.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{rao2021dynamicvit,
  title={DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification},
  author={Rao, Yongming and Zhao, Wenliang and Liu, Benlin and Lu, Jiwen and Zhou, Jie and Hsieh, Cho-Jui},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

@article{rao2022dynamicvit,
  title={Dynamic Spatial Sparsification for Efficient Vision Transformers and Convolutional Neural Networks},
  author={Rao, Yongming and Liu, Zuyan and Zhao, Wenliang and Zhou, Jie and Lu, Jiwen},
  journal={arXiv preprint arXiv:2207.01580},
  year={2022}
}
