# Oriented Response Networks (CVPR 2017)
[Home] [Project] [Paper] [Supp] [Poster]
- Tested with PyTorch 1.12.0 (Ubuntu / RTX 2080 Ti).
- A new helper function `upgrade_to_orn` for easy model conversion.
- Predefined ORN-upgraded models (OR-VGG, OR-ResNet, OR-Inception, OR-WRN, etc.).

Please check the pytorch-v2 branch for more details.
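As a rough illustration of how `upgrade_to_orn` might be used to convert a stock model (the import path, keyword argument, and orientation setting below are assumptions, not the branch's documented API):

```python
# Rough sketch only: the import path and the signature of upgrade_to_orn
# are assumptions -- see the pytorch-v2 branch for the actual API.
import torch
import torchvision.models as models
from orn import upgrade_to_orn  # assumed import path

# start from a stock ResNet-18
model = models.resnet18(num_classes=10)

# swap its Conv2d layers for ORConv layers (ARFs with, e.g., 8 orientation channels)
or_model = upgrade_to_orn(model, num_orientation=8)  # assumed keyword argument

# the upgraded model is used like any other PyTorch module
out = or_model(torch.randn(1, 3, 224, 224))
```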
The torch branch contains:
- the official torch implementation of ORN.
- the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.
- Linux (tested on Ubuntu 14.04 LTS)
- NVIDIA GPU + CUDA CuDNN (CPU mode and CUDA-without-CuDNN mode are also available but significantly slower)
- Torch7
You can set up everything via a single command:
```bash
wget -O - https://git.io/vHCMI | bash
```
or do it manually in case something goes wrong:
install the dependencies (required by the demo code):

clone the torch branch:
```bash
# git version must be greater than 1.9.10
git clone https://github.com/ZhouYanzhao/ORN.git -b torch --single-branch ORN.torch
cd ORN.torch
export DIR=$(pwd)
```
install ORN:
```bash
cd $DIR/install
# install the CPU/GPU/CuDNN version of ORN
bash install.sh
```
unzip the MNIST dataset:
```bash
cd $DIR/demo/datasets
unzip MNIST
```
run the MNIST-Variants demo:
```bash
cd $DIR/demo
# you can modify the script to test different hyper-parameters
bash ./scripts/Train_MNIST.sh
```
If you run into `'cudnn.find' not found`, update Torch7 to the latest version via `cd <TORCH_DIR> && bash ./update.sh`, then re-install everything.
## CIFAR 10/100
You can train the OR-WideResNet model (converted from WideResNet by simply replacing Conv layers with ORConv layers) on the CIFAR dataset with WRN:
```bash
dataset=cifar10_original.t7 model=or-wrn widen_factor=4 depth=40 ./scripts/train_cifar.sh
```
With exactly the same settings, the ORN-augmented WideResNet achieves state-of-the-art results while using significantly fewer parameters.
Network | Params | CIFAR-10 (ZCA) | CIFAR-10 (mean/std) | CIFAR-100 (ZCA) | CIFAR-100 (mean/std) |
---|---|---|---|---|---|
DenseNet-100-12-dropout | 7.0M | - | 4.10 | - | 20.20 |
DenseNet-190-40-dropout | 25.6M | - | 3.46 | - | 17.18 |
WRN-40-4 | 8.9M | 4.97 | 4.53 | 22.89 | 21.18 |
WRN-28-10-dropout | 36.5M | 4.17 | 3.89 | 20.50 | 18.85 |
WRN-40-10-dropout | 55.8M | - | 3.80 | - | 18.3 |
ORN-40-4(1/2) | 4.5M | 4.13 | 3.43 | 21.24 | 18.82 |
ORN-28-10(1/2)-dropout | 18.2M | 3.52 | 2.98 | 19.22 | 16.15 |
Table 1. Test error (%) on the CIFAR-10/100 datasets with flip/translation augmentation.
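The WideResNet-to-OR-WideResNet conversion mentioned above boils down to swapping every Conv layer for an ORConv layer. As an illustration of that swap (the repo's torch branch does this in Lua; the `orconv_cls` placeholder and its `num_orientation` keyword below are assumptions, not the actual API), a PyTorch-style sketch could look like this:

```python
# Illustrative sketch only: orconv_cls stands in for an oriented-response
# convolution layer; its real name and constructor arguments may differ.
import torch.nn as nn

def convert_conv_to_orconv(module: nn.Module, orconv_cls, num_orientation: int = 8) -> nn.Module:
    """Recursively replace every nn.Conv2d in `module` with an ORConv-style layer."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, orconv_cls(
                child.in_channels,
                child.out_channels,
                kernel_size=child.kernel_size,
                stride=child.stride,
                padding=child.padding,
                num_orientation=num_orientation,  # assumed keyword
            ))
        else:
            convert_conv_to_orconv(child, orconv_cls, num_orientation)
    return module
```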
## ImageNet
The effectiveness of ORN is further verified on large-scale data. The OR-ResNet-18 model, upgraded from ResNet-18, yields significantly better performance with a similar number of parameters.
Network | Params | Top1-Error | Top5-Error |
---|---|---|---|
ResNet-18 | 11.7M | 30.614 | 10.98 |
OR-ResNet-18 | 11.4M | 28.916 | 9.88 |
Table 2. Validation error (%) on the ILSVRC-2012 dataset.
You can use facebook.resnet.torch to train the OR-ResNet-18 model from scratch or fine-tune it on your data using the pre-trained weights.
```lua
-- To fill the model with the pre-trained weights:
model = require('or-resnet.lua')({tensorType='torch.CudaTensor', pretrained='or-resnet18_weights.t7'})
```
A more specific demo notebook showing how to use the pre-trained OR-ResNet to classify images can be found here.
Please check the pytorch-v2 branch for more details.

The pytorch branch contains:
- the official pytorch implementation of ORN (the alpha version supports 1x1/3x3 ARFs with 4/8 orientation channels only).
- the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.
- Linux (tested on Ubuntu 14.04 LTS)
- NVIDIA GPU + CUDA CuDNN (CPU mode and CUDA-without-CuDNN mode are also available but significantly slower)
- PyTorch
install the dependencies (required by the demo code):

clone the pytorch branch:
```bash
# git version must be greater than 1.9.10
git clone https://github.com/ZhouYanzhao/ORN.git -b pytorch --single-branch ORN.pytorch
cd ORN.pytorch
export DIR=$(pwd)
```
install ORN:
```bash
cd $DIR/install
bash install.sh
```
run the MNIST-Variants demo:
```bash
cd $DIR/demo
# train ORN on MNIST-rot
python main.py --use-arf
# train baseline CNN
python main.py
```
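Conceptually, the `--use-arf` flag switches the demo network between plain convolutions and ARF-based ORConv layers. The sketch below only illustrates that switch and is not the demo's actual code; the `orconv_cls` placeholder and its arguments are assumptions:

```python
# Conceptual sketch of the --use-arf switch; orconv_cls is a placeholder for
# the branch's ORConv layer (its real name and arguments may differ).
import torch.nn as nn

def build_features(use_arf: bool, orconv_cls=None, num_orientation: int = 8) -> nn.Sequential:
    """Two 3x3 conv blocks for MNIST; ARF-based when use_arf is True."""
    def conv(in_ch, out_ch):
        if use_arf:
            # alpha version: 1x1/3x3 ARFs with 4 or 8 orientation channels
            return orconv_cls(in_ch, out_ch, kernel_size=3, padding=1,
                              num_orientation=num_orientation)
        return nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    return nn.Sequential(
        conv(1, 16), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        conv(16, 32), nn.ReLU(inplace=True), nn.MaxPool2d(2),
    )

# baseline CNN features, i.e. roughly what running the demo without --use-arf builds
features = build_features(use_arf=False)
```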
The caffe branch contains:
- the official caffe implementation of ORN (the alpha version supports 1x1/3x3 ARFs with 4/8 orientation channels only).
- the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.
- Linux (tested on Ubuntu 14.04 LTS)
- NVIDIA GPU + CUDA CuDNN (CPU mode and CUDA-without-CuDNN mode are also available but significantly slower)
- Caffe
install the dependency (required by the demo code):
- idx2numpy: `pip install idx2numpy`
clone the caffe branch:
```bash
# git version must be greater than 1.9.10
git clone https://github.com/ZhouYanzhao/ORN.git -b caffe --single-branch ORN.caffe
cd ORN.caffe
export DIR=$(pwd)
```
install ORN:
```bash
# modify Makefile.config first
# compile ORN.caffe
make clean && make -j"$(nproc)" all
```
run the MNIST-Variants demo:
```bash
cd $DIR/examples/mnist
bash get_mnist.sh
# train ORN & CNN on MNIST-rot
bash train.sh
```
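For reference, the idx2numpy dependency installed above reads the raw MNIST IDX files (fetched by get_mnist.sh) into NumPy arrays; a minimal standalone example, with an illustrative file path, looks like this:

```python
# Minimal idx2numpy usage: read a raw MNIST IDX file into a NumPy array.
# The file path is illustrative; get_mnist.sh downloads the IDX files.
import idx2numpy

images = idx2numpy.convert_from_file('train-images-idx3-ubyte')
print(images.shape)  # (60000, 28, 28) for the standard MNIST training images
```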
Due to implementation differences,
- upgrading Conv layers to ORConv layers can be done by adding an `orn_param` block to the layer definition;
- the num_output of ORConv layers should be multiplied by the nOrientation of the ARFs (e.g. 10 ARF feature maps with 8 orientations means num_output: 80).

Example:
```protobuf
layer {
  type: "Convolution"
  name: "ORConv"
  bottom: "Data"
  top: "ORConv"
  # add this line to replace regular filters with ARFs
  orn_param {
    orientations: 8
  }
  param {
    lr_mult: 1
    decay_mult: 2
  }
  convolution_param {
    # this means 10 ARF feature maps
    num_output: 80
    kernel_size: 3
    stride: 1
    pad: 0
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
```
Check the MNIST demo prototxt (and its visualization) for more details.
If you use the code in your research, please cite:
```bibtex
@INPROCEEDINGS{Zhou2017ORN,
    author    = {Zhou, Yanzhao and Ye, Qixiang and Qiu, Qiang and Jiao, Jianbin},
    title     = {Oriented Response Networks},
    booktitle = {CVPR},
    year      = {2017}
}
```