# CutMix-PyTorch

Official PyTorch implementation of CutMix regularizer | Paper | Pretrained Models
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo.
Clova AI Research, NAVER Corp.
Our implementation is based on these repositories:
## Abstract

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images, where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performance.
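The cut-and-paste operation described above can be sketched in a few lines of PyTorch. This is an illustrative reimplementation of the idea, not the repository's exact `train.py` code; the helper names `rand_bbox` and `cutmix` are used here for exposition:

```python
import numpy as np
import torch

def rand_bbox(size, lam):
    """Sample a box covering roughly (1 - lam) of the image area."""
    H, W = size[2], size[3]
    cut_rat = np.sqrt(1.0 - lam)                 # side ratio of the patch
    cut_h, cut_w = int(H * cut_rat), int(W * cut_rat)
    # uniformly sampled box center, box clipped to the image borders
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = np.clip(cy - cut_h // 2, 0, H), np.clip(cy + cut_h // 2, 0, H)
    x1, x2 = np.clip(cx - cut_w // 2, 0, W), np.clip(cx + cut_w // 2, 0, W)
    return y1, y2, x1, x2

def cutmix(images, labels, alpha=1.0):
    """Paste a random patch from a shuffled copy of the batch into each image.

    Returns the mixed batch, both label sets, and the mixing weight lam,
    corrected to the exact fraction of surviving original pixels.
    """
    lam = np.random.beta(alpha, alpha)           # initial mixing ratio
    index = torch.randperm(images.size(0))       # pairing of images in the batch
    y1, y2, x1, x2 = rand_bbox(images.size(), lam)
    images[:, :, y1:y2, x1:x2] = images[index, :, y1:y2, x1:x2]
    # labels are mixed proportionally to the pasted area
    lam = 1.0 - (y2 - y1) * (x2 - x1) / float(images.size(2) * images.size(3))
    return images, labels, labels[index], lam
```

Because the sampled box is clipped at the image borders, `lam` is recomputed from the actual pasted area so the label weights match the pixels each image contributes.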
## Updates

- 23 May, 2019: Initial upload
## Requirements

- Python3
- PyTorch (> 1.0)
- torchvision (> 0.2)
- NumPy
## Training

- CIFAR-100: We used 2 GPUs to train CIFAR-100.

```shell
python train.py \
    --net_type pyramidnet \
    --dataset cifar100 \
    --depth 200 \
    --alpha 240 \
    --batch_size 64 \
    --lr 0.25 \
    --expname PyraNet200 \
    --epochs 300 \
    --beta 1.0 \
    --cutmix_prob 0.5 \
    --no-verbose
```

- ImageNet: We used 4 GPUs to train ImageNet.

```shell
python train.py \
    --net_type resnet \
    --dataset imagenet \
    --batch_size 256 \
    --lr 0.1 \
    --depth 50 \
    --epochs 300 \
    --expname ResNet50 \
    -j 40 \
    --beta 1.0 \
    --cutmix_prob 1.0 \
    --no-verbose
```

## Test (Evaluation)

- Evaluate a PyramidNet-200 checkpoint on CIFAR-100:

```shell
python test.py \
    --net_type pyramidnet \
    --dataset cifar100 \
    --batch_size 64 \
    --depth 200 \
    --alpha 240 \
    --pretrained /set/your/model/path/model_best.pth.tar
```

- Evaluate a ResNet-50 checkpoint on ImageNet:

```shell
python test.py \
    --net_type resnet \
    --dataset imagenet \
    --batch_size 64 \
    --depth 50 \
    --pretrained /set/your/model/path/model_best.pth.tar
```

## Experimental results and pretrained models

- PyramidNet-200 pretrained on CIFAR-100 dataset:
| Method | Top-1 Error | Model file |
|---|---|---|
| PyramidNet-200 [CVPR'17] (baseline) | 16.45 | model |
| PyramidNet-200 +CutMix | 14.23 | model |
| PyramidNet-200 + Shakedrop [arXiv'18] +CutMix | 13.81 | - |
| PyramidNet-200 + Mixup [ICLR'18] | 15.63 | model |
| PyramidNet-200 + Manifold Mixup [ICML'19] | 16.14 | model |
| PyramidNet-200 + Cutout [arXiv'17] | 16.53 | model |
| PyramidNet-200 + DropBlock [NeurIPS'18] | 15.73 | model |
| PyramidNet-200 + Cutout + Labelsmoothing | 15.61 | model |
| PyramidNet-200 + DropBlock + Labelsmoothing | 15.16 | model |
| PyramidNet-200 + Cutout + Mixup | 15.46 | model |
- ResNet models pretrained on ImageNet dataset:
| Method | Top-1 Error | Model file |
|---|---|---|
| ResNet-50 [CVPR'16] (baseline) | 23.68 | model |
| ResNet-50 +CutMix | 21.40 | model |
| ResNet-50 +Feature CutMix | 21.80 | model |
| ResNet-50 + Mixup [ICLR'18] | 22.58 | model |
| ResNet-50 + Manifold Mixup [ICML'19] | 22.50 | model |
| ResNet-50 + Cutout [arXiv'17] | 22.93 | model |
| ResNet-50 + AutoAugment [CVPR'19] | 22.40* | - |
| ResNet-50 + DropBlock [NeurIPS'18] | 21.87* | - |
| ResNet-101 +CutMix | 20.17 | model |
| ResNet-152 +CutMix | 19.20 | model |
| ResNeXt-101 (32x4d) +CutMix | 19.47 | model |
* denotes results reported in the original papers
- Transfer learning results of ImageNet-pretrained backbones:

| Backbone | ImageNet Cls (%) | ImageNet Loc (%) | CUB200 Loc (%) | Detection (SSD) (mAP) | Detection (Faster-RCNN) (mAP) | Image Captioning (BLEU-4) |
|---|---|---|---|---|---|---|
| ResNet50 | 23.68 | 46.3 | 49.41 | 76.7 | 75.6 | 22.9 |
| ResNet50+Mixup | 22.58 | 45.84 | 49.3 | 76.6 | 73.9 | 23.2 |
| ResNet50+Cutout | 22.93 | 46.69 | 52.78 | 76.8 | 75 | 24.0 |
| ResNet50+CutMix | 21.60 | 46.25 | 54.81 | 77.6 | 76.7 | 24.9 |
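The `--beta` and `--cutmix_prob` flags in the training commands above control, respectively, the `Beta(beta, beta)` draw for the mixing ratio and the per-batch probability of applying CutMix; the two label sets are then combined in the loss in proportion to the pasted area. A hedged sketch of this bookkeeping (helper names `sample_lam` and `cutmix_loss` are ours, not the repository's exact code):

```python
import numpy as np
import torch
import torch.nn.functional as F

def sample_lam(beta=1.0, cutmix_prob=0.5):
    """Mixing ratio: a Beta(beta, beta) draw, or 1.0 when CutMix is skipped."""
    if np.random.rand() < cutmix_prob:
        return float(np.random.beta(beta, beta))
    return 1.0  # lam = 1 leaves the batch unmixed

def cutmix_loss(logits, target_a, target_b, lam):
    """Cross-entropy against both label sets, interpolated by lam."""
    return lam * F.cross_entropy(logits, target_a) + \
           (1.0 - lam) * F.cross_entropy(logits, target_b)
```

With `lam = 1.0` the mixed loss reduces to plain cross-entropy on the original labels, so batches where CutMix is skipped train exactly as in the baseline.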
## Third-party implementations

- Pytorch-CutMix by @hysts
- TensorFlow-CutMix by @jis478
## Citation

```
@inproceedings{yun2019cutmix,
    title={CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features},
    author={Yun, Sangdoo and Han, Dongyoon and Oh, Seong Joon and Chun, Sanghyuk and Choe, Junsuk and Yoo, Youngjoon},
    booktitle = {International Conference on Computer Vision (ICCV)},
    year={2019},
    pubstate={published},
    tppubtype={inproceedings}
}
```

## License

```
Copyright (c) 2019-present NAVER Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```