
Oriented Response Networks, in CVPR 2017

ZhouYanzhao/ORN

[Home][Project][Paper][Supp][Poster]

(Figure: ORN illustration)

🎉 Update: reimplemented ORN with support for modern PyTorch.

  • Tested with PyTorch 1.12.0 (Ubuntu / GTX 2080 Ti)
  • A new helper function `upgrade_to_orn` for easy model conversion.
  • Predefined ORN-upgraded models (OR-VGG, OR-ResNet, OR-Inception, OR-WRN, etc.).

Please check the `pytorch-v2` branch for more details.
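Conceptually, upgrading a model to ORN means walking its layer tree and swapping plain convolutions for oriented ones. The sketch below illustrates that idea with toy stand-in classes; it is not the real `upgrade_to_orn` API (whose signature lives in the pytorch-v2 branch), and the class and attribute names here are invented for illustration.

```python
# Toy sketch of the "upgrade Conv layers to ORConv layers" idea.
# Conv2d / ORConv2d / Model below are hypothetical stand-ins, NOT the
# real PyTorch or ORN classes.

class Conv2d:
    def __init__(self, in_ch, out_ch, k):
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k

class ORConv2d(Conv2d):
    """A Conv2d whose filters are Active Rotating Filters (ARFs)."""
    def __init__(self, in_ch, out_ch, k, n_orientations=8):
        super().__init__(in_ch, out_ch, k)
        self.n_orientations = n_orientations

class Model:
    def __init__(self):
        # a flat dict stands in for a real module tree
        self.features = {"conv1": Conv2d(3, 16, 3), "conv2": Conv2d(16, 32, 3)}

def upgrade_to_orn_sketch(model, n_orientations=8):
    """Replace every plain Conv2d in the toy module dict with an ORConv2d."""
    for name, mod in model.features.items():
        if type(mod) is Conv2d:
            model.features[name] = ORConv2d(mod.in_ch, mod.out_ch, mod.k,
                                            n_orientations)
    return model

m = upgrade_to_orn_sketch(Model())
assert all(isinstance(v, ORConv2d) for v in m.features.values())
```

The real helper presumably performs the same kind of in-place substitution over `nn.Module` children; see the pytorch-v2 branch for the actual interface.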

Torch Implementation

The `torch` branch contains:

  • the official Torch implementation of ORN.
  • the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA CuDNN (CPU mode and CUDA without CuDNN mode are also available but significantly slower)
  • Torch7

Getting started

You can set up everything via a single command, `wget -O - https://git.io/vHCMI | bash`, or do it manually in case something goes wrong:

  1. install the dependencies (required by the demo code):

  2. clone the torch branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b torch --single-branch ORN.torch
    cd ORN.torch
    export DIR=$(pwd)
  3. install ORN:

    cd $DIR/install
    # install the CPU/GPU/CuDNN version ORN
    bash install.sh
  4. unzip the MNIST dataset:

    cd $DIR/demo/datasets
    unzip MNIST
  5. run the MNIST-Variants demo:

    cd $DIR/demo
    # you can modify the script to test different hyper-parameters
    bash ./scripts/Train_MNIST.sh

Troubleshooting

If you run into `'cudnn.find' not found`, update Torch7 to the latest version via `cd <TORCH_DIR> && bash ./update.sh`, then re-install everything.

More experiments

CIFAR 10/100

You can train the OR-WideResNet model (converted from WideResNet by simply replacing Conv layers with ORConv layers) on the CIFAR datasets with WRN.

dataset=cifar10_original.t7 model=or-wrn widen_factor=4 depth=40 ./scripts/train_cifar.sh

With exactly the same settings, the ORN-augmented WideResNet achieves state-of-the-art results while using significantly fewer parameters.

CIFAR

| Network | Params | CIFAR-10 (ZCA) | CIFAR-10 (mean/std) | CIFAR-100 (ZCA) | CIFAR-100 (mean/std) |
|---|---|---|---|---|---|
| DenseNet-100-12-dropout | 7.0M | - | 4.10 | - | 20.20 |
| DenseNet-190-40-dropout | 25.6M | - | 3.46 | - | 17.18 |
| WRN-40-4 | 8.9M | 4.97 | 4.53 | 22.89 | 21.18 |
| WRN-28-10-dropout | 36.5M | 4.17 | 3.89 | 20.50 | 18.85 |
| WRN-40-10-dropout | 55.8M | - | 3.80 | - | 18.3 |
| ORN-40-4(1/2) | 4.5M | 4.13 | 3.43 | 21.24 | 18.82 |
| ORN-28-10(1/2)-dropout | 18.2M | 3.52 | 2.98 | 19.22 | 16.15 |

Table 1. Test error (%) on the CIFAR-10/100 datasets with flip/translation augmentation.

ImageNet

ILSVRC2012

The effectiveness of ORN is further verified on large-scale data. The OR-ResNet-18 model upgraded from ResNet-18 yields significantly better performance with a similar number of parameters.

| Network | Params | Top-1 Error | Top-5 Error |
|---|---|---|---|
| ResNet-18 | 11.7M | 30.614 | 10.98 |
| OR-ResNet-18 | 11.4M | 28.916 | 9.88 |

Table 2. Validation error (%) on the ILSVRC-2012 dataset.

You can use facebook.resnet.torch to train the OR-ResNet-18 model from scratch, or finetune it on your data using the pre-trained weights.

```lua
-- To fill the model with the pre-trained weights:
model = require('or-resnet.lua')({tensorType='torch.CudaTensor', pretrained='or-resnet18_weights.t7'})
```

A more specific demo notebook on using the pre-trained OR-ResNet to classify images can be found here.

PyTorch Implementation (Deprecated)

Please check the `pytorch-v2` branch for more details.

The `pytorch` branch contains:

  • the official PyTorch implementation of ORN (the alpha version supports 1x1/3x3 ARFs with 4/8 orientation channels only).
  • the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA CuDNN (CPU mode and CUDA without CuDNN mode are also available but significantly slower)
  • PyTorch

Getting started

  1. install the dependencies (required by the demo code):

  2. clone the pytorch branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b pytorch --single-branch ORN.pytorch
    cd ORN.pytorch
    export DIR=$(pwd)
  3. install ORN:

    cd $DIR/install
    bash install.sh
  4. run the MNIST-Variants demo:

    cd $DIR/demo
    # train ORN on MNIST-rot
    python main.py --use-arf
    # train baseline CNN
    python main.py

Caffe Implementation

The `caffe` branch contains:

  • the official Caffe implementation of ORN (the alpha version supports 1x1/3x3 ARFs with 4/8 orientation channels only).
  • the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA CuDNN (CPU mode and CUDA without CuDNN mode are also available but significantly slower)
  • Caffe

Getting started

  1. install the dependency (required by the demo code):

  2. clone the caffe branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b caffe --single-branch ORN.caffe
    cd ORN.caffe
    export DIR=$(pwd)
  3. install ORN:

    # modify Makefile.config first
    # compile ORN.caffe
    make clean && make -j"$(nproc)" all
  4. run the MNIST-Variants demo:

    cd $DIR/examples/mnist
    bash get_mnist.sh
    # train ORN & CNN on MNIST-rot
    bash train.sh

Note

Due to implementation differences,

  • upgrading Conv layers to ORConv layers can be done by adding an `orn_param`
  • the num_output of ORConv layers should be multiplied by the nOrientation of the ARFs

Example:

```
layer {
  type: "Convolution"
  name: "ORConv"
  bottom: "Data"
  top: "ORConv"
  # add this line to replace regular filters with ARFs
  orn_param {
    orientations: 8
  }
  param {
    lr_mult: 1
    decay_mult: 2
  }
  convolution_param {
    # this means 10 ARF feature maps
    num_output: 80
    kernel_size: 3
    stride: 1
    pad: 0
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
```

Check the MNIST demo prototxt (and its visualization) for more details.
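The num_output rule in the note above is plain arithmetic: the Caffe layer's output-channel count is the desired number of ARF feature maps times nOrientation. A one-line sketch (the function name here is invented for illustration, not part of ORN's API):

```python
# num_output for an ORConv layer = desired ARF feature maps * nOrientation
def orconv_num_output(n_feature_maps, n_orientations):
    return n_feature_maps * n_orientations

# matches the prototxt example: 10 ARF feature maps with 8 orientations
assert orconv_num_output(10, 8) == 80
```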

Citation

If you use the code in your research, please cite:

```bibtex
@INPROCEEDINGS{Zhou2017ORN,
  author    = {Zhou, Yanzhao and Ye, Qixiang and Qiu, Qiang and Jiao, Jianbin},
  title     = {Oriented Response Networks},
  booktitle = {CVPR},
  year      = {2017}
}
```
