Two simple and effective vision transformer designs that are on par with the Swin Transformer.
Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks.
Figure 1. Twins-SVT-S Architecture (Right side shows the inside of two consecutive Transformer Encoders).
First, clone the repository locally:
git clone https://github.com/Meituan-AutoML/Twins.git
Then, install PyTorch 1.7.0+, torchvision 0.8.1+, and pytorch-image-models==0.3.2:
conda install -c pytorch pytorch torchvision
pip install timm==0.3.2
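To confirm the environment matches these versions, a quick check can be run (a minimal sketch; the exact versions reported on your machine may differ):

```python
# Sanity-check the dependency versions listed above.
import torch
import torchvision
import timm

print("torch:", torch.__version__)              # expected: 1.7.0 or newer
print("torchvision:", torchvision.__version__)  # expected: 0.8.1 or newer
print("timm:", timm.__version__)                # expected: 0.3.2
```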
Download and extract ImageNet train and val images from http://image-net.org/. The directory structure is the standard layout for torchvision's datasets.ImageFolder, and the training and validation data are expected to be in the train/ folder and val/ folder respectively:
/path/to/imagenet/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  val/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
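This layout is exactly what torchvision's ImageFolder consumes; a minimal loading sketch is shown below (the transform here is a generic ImageNet preprocessing, not the exact augmentation recipe used by main.py):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Generic ImageNet-style validation preprocessing (illustrative values).
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Each class subfolder becomes one label; train/ and val/ are loaded the same way.
val_set = datasets.ImageFolder("/path/to/imagenet/val", transform=transform)
val_loader = DataLoader(val_set, batch_size=128, shuffle=False, num_workers=8)
```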
We provide a series of Twins models pretrained on the ILSVRC2012 ImageNet-1K dataset.
Model | Alias in the paper | Acc@1 | FLOPs(G) | #Params (M) | URL | Log |
---|---|---|---|---|---|---|
PCPVT-Small | Twins-PCPVT-S | 81.2 | 3.7 | 24.1 | pcpvt_small.pth | pcpvt_s.txt |
PCPVT-Base | Twins-PCPVT-B | 82.7 | 6.4 | 43.8 | pcpvt_base.pth | pcpvt_b.txt |
PCPVT-Large | Twins-PCPVT-L | 83.1 | 9.5 | 60.9 | pcpvt_large.pth | pcpvt_l.txt |
ALTGVT-Small | Twins-SVT-S | 81.7 | 2.8 | 24 | alt_gvt_small.pth | svt_s.txt |
ALTGVT-Base | Twins-SVT-B | 83.2 | 8.3 | 56 | alt_gvt_base.pth | svt_b.txt |
ALTGVT-Large | Twins-SVT-L | 83.7 | 14.8 | 99.2 | alt_gvt_large.pth | svt_l.txt |
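For standalone use of a downloaded checkpoint, a minimal loading sketch is given below. It assumes the repository's model file (gvt.py) has been imported so that the model names used by main.py are registered with timm, and that the checkpoint stores its weights under a "model" key as in DeiT; both are assumptions about this repo's exact layout rather than documented behaviour.

```python
import torch
import timm

import gvt  # assumed: importing the repo's model file registers the Twins models with timm

# Model names follow the training commands below, e.g. alt_gvt_small for Twins-SVT-S.
model = timm.create_model("alt_gvt_small", pretrained=False, num_classes=1000)

ckpt = torch.load("alt_gvt_small.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # DeiT-style checkpoints usually wrap weights in "model"
model.load_state_dict(state_dict)
model.eval()
```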
To train Twins-SVT-B on ImageNet using 8 gpus for 300 epochs, run
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --model alt_gvt_base --batch-size 128 --data-path path_to_imagenet --dist-eval --drop-path 0.3
To evaluate the performance of Twins-SVT-L on ImageNet using one GPU, run
python main.py --eval --resume alt_gvt_large.pth --model alt_gvt_large --data-path path_to_imagenet
This should give
* Acc@1 83.660 Acc@5 96.512 loss 0.722
Accuracy of the network on the 50000 test images: 83.7%
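Acc@1 and Acc@5 are the usual top-k accuracies; the repository's evaluation loop (inherited from DeiT/timm) computes them over the full validation set. A minimal sketch of the metric itself, on dummy data and assuming nothing about the repo's utilities:

```python
import torch

def topk_accuracy(logits, targets, ks=(1, 5)):
    """Top-k accuracy in percent for a single batch of logits."""
    maxk = max(ks)
    _, pred = logits.topk(maxk, dim=1)       # (batch, maxk) predicted class indices
    correct = pred.eq(targets.unsqueeze(1))  # (batch, maxk) hit matrix
    return [correct[:, :k].any(dim=1).float().mean().item() * 100 for k in ks]

logits = torch.randn(8, 1000)            # dummy model outputs
targets = torch.randint(0, 1000, (8,))   # dummy ground-truth labels
acc1, acc5 = topk_accuracy(logits, targets)
print(f"Acc@1 {acc1:.3f} Acc@5 {acc5:.3f}")
```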
Our code is based on mmsegmentation. Please install mmsegmentation first.
We provide a series of Twins models and training logs trained on the Ade20k dataset. It's easy to extend it to other datasets and segmentation methods.
Model | Alias in the paper | mIoU(ss/ms) | FLOPs(G) | #Params (M) | URL | Log |
---|---|---|---|---|---|---|
PCPVT-Small | Twins-PCPVT-S | 46.2/47.5 | 234 | 54.6 | pcpvt_small.pth | pcpvt_s.txt |
PCPVT-Base | Twins-PCPVT-B | 47.1/48.4 | 250 | 74.3 | pcpvt_base.pth | pcpvt_b.txt |
PCPVT-Large | Twins-PCPVT-L | 48.6/49.8 | 269 | 91.5 | pcpvt_large.pth | pcpvt_l.txt |
ALTGVT-Small | Twins-SVT-S | 46.2/47.1 | 228 | 54.4 | alt_gvt_small.pth | svt_s.txt |
ALTGVT-Base | Twins-SVT-B | 47.4/48.9 | 261 | 88.5 | alt_gvt_base.pth | svt_b.txt |
ALTGVT-Large | Twins-SVT-L | 48.8/50.2 | 297 | 133 | alt_gvt_large.pth | svt_l.txt |
To train Twins-PCPVT-Large on Ade20k using 8 gpus for 160k iterations with a global batch size of 16, run
bash dist_train.sh configs/upernet_pcpvt_l_512x512_160k_ade20k_swin_setting.py 8
To evaluate Twins-PCPVT-Large on Ade20k using 8 gpus (single scale), run
bash dist_test.sh configs/upernet_pcpvt_l_512x512_160k_ade20k_swin_setting.py checkpoint_file 8 --eval mIoU
To evaluate Twins-PCPVT-Large on Ade20k using 8 gpus (multi scale), run
bash dist_test.sh configs/upernet_pcpvt_l_512x512_160k_ade20k_swin_setting.py checkpoint_file 8 --eval mIoU --aug-test
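Beyond the batch evaluation scripts above, a trained model can also be applied to a single image through mmsegmentation's high-level API; a minimal sketch, assuming an mmsegmentation 0.x-style install and reusing the config and checkpoint paths from the commands above (the image path is a placeholder):

```python
from mmseg.apis import inference_segmentor, init_segmentor

config_file = "configs/upernet_pcpvt_l_512x512_160k_ade20k_swin_setting.py"
checkpoint_file = "checkpoint_file"  # path to the trained .pth weights

# Build the segmentor from the config and load the trained weights.
model = init_segmentor(config_file, checkpoint_file, device="cuda:0")

# Run inference on one image; the result is a per-pixel class map.
result = inference_segmentor(model, "demo.png")
```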
Our code is based on mmdetection. Please install mmdetection first (we use v2.8.0). We use both Mask R-CNN and RetinaNet to evaluate our method. It's easy to apply Twins to other detectors provided by mmdetection based on our examples.
To train Twins-SVT-Small on COCO with 8 gpus for 1x schedule (PVT setting) under the framework of Mask R-CNN, run
bash dist_train.sh configs/mask_rcnn_alt_gvt_s_fpn_1x_coco_pvt_setting.py 8
To train Twins-SVT-Small on COCO with 8 gpus for 3x schedule (Swin setting) under the framework of Mask R-CNN, run
bash dist_train.sh configs/mask_rcnn_alt_gvt_s_fpn_3x_coco_swin_setting.py 8
To evaluate the mAP of Twins-SVT-Small on COCO using 8 gpus based on the RetinaNet framework, run
bash dist_test.sh configs/retinanet_alt_gvt_s_fpn_1x_coco_pvt_setting.py checkpoint_file 8 --eval mAP
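Similarly, a trained detector can be applied to a single image through mmdetection's high-level API; a minimal sketch, assuming an mmdetection 2.x-style install (v2.8.0 as above) and reusing the Mask R-CNN config from the commands above; the image and checkpoint paths are placeholders:

```python
from mmdet.apis import inference_detector, init_detector

config_file = "configs/mask_rcnn_alt_gvt_s_fpn_1x_coco_pvt_setting.py"
checkpoint_file = "checkpoint_file"  # path to the trained .pth weights

# Build the detector from the config and load the trained weights.
model = init_detector(config_file, checkpoint_file, device="cuda:0")

# Run inference; for Mask R-CNN the result contains per-class boxes and masks.
result = inference_detector(model, "demo.jpg")
```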
If you find this project useful in your research, please consider citing the following:
Twins:
@inproceedings{chu2021Twins,
  title={Twins: Revisiting the Design of Spatial Attention in Vision Transformers},
  author={Xiangxiang Chu and Zhi Tian and Yuqing Wang and Bo Zhang and Haibing Ren and Xiaolin Wei and Huaxia Xia and Chunhua Shen},
  booktitle={NeurIPS 2021},
  url={https://openreview.net/forum?id=5kTlVBkzSRx},
  year={2021}
}
CPVT:
@inproceedings{chu2023CPVT,
  title={Conditional Positional Encodings for Vision Transformers},
  author={Xiangxiang Chu and Zhi Tian and Bo Zhang and Xinlong Wang and Chunhua Shen},
  booktitle={ICLR 2023},
  url={https://openreview.net/forum?id=3KWnuT-R1bh},
  year={2023}
}
We heavily borrow code from DeiT and PVT. We test throughputs as in Swin Transformer.
This repository is released under the Apache 2.0 license as found in the LICENSE file.