# Jingkang50/ICCV21_SCOOD

The Official Implementation of the ICCV-2021 Paper: Semantically Coherent Out-of-Distribution Detection.


paper | project page | gdrive | onedrive

This repository is the official implementation of the paper:

Semantically Coherent Out-of-Distribution Detection
Jingkang Yang, Haoqi Wang, Litong Feng, Xiaopeng Yan, Huabin Zheng, Wayne Zhang, Ziwei Liu
Proceedings of the IEEE International Conference on Computer Vision (ICCV 2021)

(Figure: overview of the UDG framework)

## Dependencies

We use conda to manage our dependencies, and CUDA 10.1 to run our experiments.

You can specify the appropriate cudatoolkit version to install on your machine in the environment.yml file, and then run the following to create the conda environment:

```bash
conda env create -f environment.yml
conda activate scood
```

## SC-OOD Dataset

(Figure: the SC-OOD benchmark)

The SC-OOD dataset introduced in the paper can be downloaded here:

gdrive | onedrive

Our codebase accesses the dataset from the root directory in a folder named data/ by default, i.e.

```
├── ...
├── data
│   ├── images
│   └── imglist
├── scood
├── test.py
├── train.py
├── ...
```
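If your copy of the dataset lives elsewhere, point the scripts at it with --data_dir. As a quick pre-flight check, a small helper like the following (hypothetical, not part of this codebase) can confirm the expected layout before training:

```python
# Hypothetical pre-flight check (not part of this repo): verify that the
# SC-OOD data directory matches the layout expected by train.py/test.py.
from pathlib import Path


def check_scood_layout(data_dir: str = "data") -> None:
    root = Path(data_dir)
    missing = [sub for sub in ("images", "imglist") if not (root / sub).is_dir()]
    if missing:
        raise FileNotFoundError(
            f"{root}/ is missing subfolder(s): {', '.join(missing)}. "
            "Download the SC-OOD dataset and unpack it here first."
        )
    print(f"SC-OOD layout under {root}/ looks good.")


if __name__ == "__main__":
    check_scood_layout()
```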

## Training

The entry point for training is the train.py script. The hyperparameters for each experiment are specified by a .yml configuration file (examples are given in configs/train/).

All experiment artifacts are saved in the specified args.output_dir directory.

```bash
python train.py \
    --config configs/train/cifar10_udg.yml \
    --data_dir data \
    --output_dir output/cifar10_udg
```
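For orientation, a config-driven entry point of this shape typically looks like the sketch below. This is an illustration under stated assumptions (argparse + PyYAML), not the actual contents of train.py:

```python
# Minimal sketch of a config-driven training entry point, assuming
# argparse + PyYAML. The real train.py may differ in details.
import argparse

import yaml


def main() -> None:
    parser = argparse.ArgumentParser(description="Train an SC-OOD model.")
    parser.add_argument("--config", required=True, help="path to a .yml config")
    parser.add_argument("--data_dir", default="data", help="dataset root directory")
    parser.add_argument("--output_dir", required=True, help="artifact directory")
    args = parser.parse_args()

    with open(args.config) as f:
        config = yaml.safe_load(f)  # hyperparameters, e.g. arch / lr / epochs

    # ... build data loaders from args.data_dir, train according to `config`,
    # and write checkpoints into args.output_dir ...
    print(f"loaded {len(config)} config entries from {args.config}")


if __name__ == "__main__":
    main()
```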

## Testing

Evaluation of a trained model is performed by the test.py script, with its hyperparameters also specified by a .yml configuration file (examples are given in configs/test/).

Within the configuration file, you can also specify which post-processing OOD method to use (e.g. ODIN or the energy-based OOD detector, EBO).
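For reference, both post-processors reduce to simple functions of a trained network's logits. The sketch below follows the standard published definitions (omitting ODIN's input-perturbation step) and is not necessarily line-for-line what this repository implements:

```python
# Standard post-processing OOD scores (published definitions; this repo's
# exact implementation may differ). Higher score = more in-distribution.
import torch
import torch.nn.functional as F


def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # EBO (Liu et al., 2020): negative free energy, T * logsumexp(logits / T).
    return temperature * torch.logsumexp(logits / temperature, dim=1)


def odin_score(logits: torch.Tensor, temperature: float = 1000.0) -> torch.Tensor:
    # ODIN (Liang et al., 2018), minus the input-perturbation step:
    # maximum temperature-scaled softmax probability.
    return F.softmax(logits / temperature, dim=1).max(dim=1).values


# Usage: scores = energy_score(model(x)); inputs scoring below a threshold
# chosen on validation data are flagged as OOD.
```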

The evaluation results are saved in a .csv file as specified.

```bash
python test.py \
    --config configs/test/cifar10.yml \
    --checkpoint output/cifar10_udg/best.ckpt \
    --data_dir data \
    --csv_path output/cifar10_udg/results.csv
```

## Results

We report mean ± std results from the current codebase below; they match the performance reported in our original paper.

### CIFAR-10 (+ Tiny-ImageNet) Results

You can run the following script (specifying the data and output directories), which performs training and testing for our main experimental results:

CIFAR-10, UDG:

```bash
bash scripts/cifar10_udg.sh data_dir output_dir
```

#### CIFAR-10 (+ Tiny-ImageNet), ResNet18

| Metrics | ODIN | EBO | OE | UDG (ours) |
| --- | --- | --- | --- | --- |
| FPR95 ↓ | 50.76 ± 3.39 | 50.70 ± 2.86 | 54.99 ± 4.06 | 39.94 ± 3.77 |
| AUROC ↑ | 82.11 ± 0.24 | 83.99 ± 1.05 | 87.48 ± 0.61 | 93.27 ± 0.64 |
| AUPR In ↑ | 73.07 ± 0.40 | 76.84 ± 1.56 | 85.75 ± 1.70 | 93.36 ± 0.56 |
| AUPR Out ↑ | 85.06 ± 0.29 | 85.44 ± 0.73 | 86.95 ± 0.28 | 91.21 ± 1.23 |
| CCR@FPRe-4 ↑ | 0.30 ± 0.04 | 0.26 ± 0.09 | 7.09 ± 0.48 | 16.36 ± 4.33 |
| CCR@FPRe-3 ↑ | 1.22 ± 0.28 | 1.46 ± 0.18 | 13.69 ± 0.78 | 32.99 ± 4.16 |
| CCR@FPRe-2 ↑ | 6.13 ± 0.72 | 8.17 ± 0.96 | 29.60 ± 5.31 | 59.14 ± 2.60 |
| CCR@FPRe-1 ↑ | 39.61 ± 0.72 | 47.57 ± 3.33 | 64.33 ± 3.44 | 81.04 ± 1.46 |
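A note on the metrics: FPR95 is the false positive rate on OOD data at the threshold where 95% of in-distribution (ID) samples are accepted; AUROC and AUPR are the areas under the ROC and precision-recall curves; CCR@FPRe-n is, roughly, the fraction of ID samples that are both accepted and correctly classified at the threshold giving an OOD false positive rate of 1e-n. A minimal sketch of the first two, assuming scikit-learn (this is not the repository's evaluation code):

```python
# Sketch of FPR95 and AUROC, assuming scikit-learn. Convention here:
# label 1 = in-distribution, label 0 = OOD; higher score = more ID.
# Not the repository's evaluation code.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve


def fpr_at_95_tpr(labels: np.ndarray, scores: np.ndarray) -> float:
    fpr, tpr, _ = roc_curve(labels, scores)
    return float(np.interp(0.95, tpr, fpr))  # FPR at the 95%-TPR operating point


def auroc(labels: np.ndarray, scores: np.ndarray) -> float:
    return float(roc_auc_score(labels, scores))
```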

#### CIFAR-10 (+ Tiny-ImageNet), DenseNet

| Metrics | ODIN | EBO | OE | UDG (ours) |
| --- | --- | --- | --- | --- |
| FPR95 ↓ | 51.75 ± 4.22 | 51.11 ± 3.67 | 63.83 ± 8.73 | 43.29 ± 3.37 |
| AUROC ↑ | 86.68 ± 1.74 | 86.56 ± 1.37 | 83.59 ± 3.14 | 91.8 ± 0.65 |
| AUPR In ↑ | 83.35 ± 2.36 | 84.05 ± 1.75 | 81.78 ± 3.16 | 91.12 ± 0.83 |
| AUPR Out ↑ | 87.1 ± 1.53 | 86.19 ± 1.26 | 82.21 ± 3.51 | 90.73 ± 0.65 |
| CCR@FPRe-4 ↑ | 1.53 ± 0.81 | 2.08 ± 1.07 | 2.57 ± 0.83 | 8.63 ± 1.86 |
| CCR@FPRe-3 ↑ | 5.33 ± 1.35 | 6.98 ± 1.46 | 7.46 ± 1.66 | 19.95 ± 1.95 |
| CCR@FPRe-2 ↑ | 20.35 ± 3.57 | 23.13 ± 2.92 | 21.97 ± 3.6 | 45.93 ± 3.33 |
| CCR@FPRe-1 ↑ | 60.36 ± 4.47 | 60.01 ± 3.06 | 56.67 ± 5.53 | 76.53 ± 1.23 |

#### CIFAR-10 (+ Tiny-ImageNet), WideResNet

| Metrics | ODIN | EBO | OE | UDG (ours) |
| --- | --- | --- | --- | --- |
| FPR95 ↓ | 45.04 ± 10.5 | 38.99 ± 2.71 | 43.85 ± 2.68 | 34.11 ± 1.77 |
| AUROC ↑ | 84.81 ± 6.84 | 89.94 ± 2.77 | 91.02 ± 0.54 | 94.25 ± 0.2 |
| AUPR In ↑ | 77.12 ± 11.7 | 85.39 ± 5.73 | 89.86 ± 0.7 | 93.93 ± 0.12 |
| AUPR Out ↑ | 87.65 ± 4.48 | 90.21 ± 1.81 | 90.11 ± 0.73 | 93.39 ± 0.29 |
| CCR@FPRe-4 ↑ | 2.86 ± 3.84 | 3.88 ± 5.09 | 9.58 ± 1.15 | 13.8 ± 0.7 |
| CCR@FPRe-3 ↑ | 8.27 ± 10.77 | 10.05 ± 12.32 | 18.67 ± 1.7 | 29.26 ± 1.82 |
| CCR@FPRe-2 ↑ | 19.56 ± 21.85 | 23.58 ± 20.67 | 39.35 ± 2.66 | 56.9 ± 1.73 |
| CCR@FPRe-1 ↑ | 49.13 ± 24.56 | 67.91 ± 10.61 | 74.7 ± 1.54 | 83.88 ± 0.2 |

### CIFAR-100 (+ Tiny-ImageNet) Results

You can run the following script (specifying the data and output directories), which performs training and testing for our main experimental results:

CIFAR-100, UDG:

```bash
bash scripts/cifar100_udg.sh data_dir output_dir
```

#### CIFAR-100 (+ Tiny-ImageNet), ResNet18

| Metrics | ODIN | EBO | OE | UDG (ours) |
| --- | --- | --- | --- | --- |
| FPR95 ↓ | 79.87 ± 0.68 | 78.93 ± 1.39 | 81.53 ± 0.86 | 81.35 ± 0.42 |
| AUROC ↑ | 78.73 ± 0.28 | 80.1 ± 0.46 | 78.67 ± 0.46 | 75.52 ± 0.87 |
| AUPR In ↑ | 79.22 ± 0.28 | 81.49 ± 0.39 | 80.84 ± 0.33 | 74.49 ± 1.89 |
| AUPR Out ↑ | 73.37 ± 0.49 | 73.72 ± 0.44 | 71.75 ± 0.52 | 71.25 ± 0.57 |
| CCR@FPRe-4 ↑ | 1.64 ± 0.51 | 2.55 ± 0.5 | 4.65 ± 0.55 | 1.22 ± 0.39 |
| CCR@FPRe-3 ↑ | 5.91 ± 0.6 | 7.71 ± 1.02 | 11.07 ± 0.43 | 4.58 ± 0.68 |
| CCR@FPRe-2 ↑ | 18.74 ± 0.87 | 22.58 ± 0.8 | 23.26 ± 0.33 | 14.89 ± 1.36 |
| CCR@FPRe-1 ↑ | 46.92 ± 0.15 | 50.2 ± 0.62 | 46.73 ± 0.73 | 39.94 ± 1.68 |

#### CIFAR-100 (+ Tiny-ImageNet), DenseNet

| Metrics | ODIN | EBO | OE | UDG (ours) |
| --- | --- | --- | --- | --- |
| FPR95 ↓ | 83.68 ± 0.57 | 82.18 ± 1.23 | 86.71 ± 2.25 | 80.67 ± 2.6 |
| AUROC ↑ | 73.74 ± 0.84 | 76.9 ± 0.89 | 70.74 ± 2.95 | 75.54 ± 1.69 |
| AUPR In ↑ | 73.06 ± 1.09 | 77.45 ± 1.16 | 70.74 ± 3.0 | 75.65 ± 2.13 |
| AUPR Out ↑ | 69.2 ± 0.65 | 70.8 ± 0.78 | 66.33 ± 2.63 | 70.99 ± 1.62 |
| CCR@FPRe-4 ↑ | 0.55 ± 0.06 | 1.33 ± 0.53 | 1.28 ± 0.33 | 1.68 ± 0.3 |
| CCR@FPRe-3 ↑ | 2.94 ± 0.16 | 4.88 ± 0.82 | 3.81 ± 0.68 | 5.89 ± 1.43 |
| CCR@FPRe-2 ↑ | 11.12 ± 1.1 | 15.53 ± 1.36 | 11.29 ± 1.91 | 16.41 ± 1.8 |
| CCR@FPRe-1 ↑ | 35.98 ± 1.37 | 42.44 ± 1.33 | 31.71 ± 2.73 | 40.28 ± 2.37 |

#### CIFAR-100 (+ Tiny-ImageNet), WideResNet

| Metrics | ODIN | EBO | OE | UDG (ours) |
| --- | --- | --- | --- | --- |
| FPR95 ↓ | 79.59 ± 1.36 | 78.86 ± 1.70 | 80.08 ± 2.80 | 76.03 ± 2.82 |
| AUROC ↑ | 77.45 ± 0.77 | 80.13 ± 0.56 | 79.24 ± 2.40 | 79.78 ± 1.41 |
| AUPR In ↑ | 75.25 ± 1.20 | 80.18 ± 0.57 | 80.24 ± 3.03 | 79.96 ± 2.02 |
| AUPR Out ↑ | 73.2 ± 0.77 | 73.71 ± 0.58 | 73.14 ± 2.19 | 74.77 ± 1.21 |
| CCR@FPRe-4 ↑ | 0.43 ± 0.21 | 0.58 ± 0.25 | 2.39 ± 0.74 | 1.47 ± 1.08 |
| CCR@FPRe-3 ↑ | 2.31 ± 0.60 | 3.46 ± 0.80 | 7.97 ± 1.47 | 5.43 ± 2.09 |
| CCR@FPRe-2 ↑ | 11.01 ± 1.29 | 17.55 ± 1.24 | 21.97 ± 2.92 | 18.88 ± 3.53 |
| CCR@FPRe-1 ↑ | 43.2 ± 1.80 | 51.54 ± 0.65 | 49.36 ± 3.98 | 48.95 ± 1.91 |

Note: This work was originally built on our company's internal deep-learning framework, on which all results in the paper were produced. We extracted the related code into this standalone release and verified that most of the results can be reproduced. CIFAR-10 readily matches the paper's numbers, but the CIFAR-100 benchmark may show small differences, perhaps due to minor differences in framework modules and some randomness. We are currently enhancing our codebase and exploring UDG on large-scale datasets.

## License and Acknowledgements

This project is open-sourced under the MIT license.

The codebase is refactored by Ang Yi Zhe, and maintained by Jingkang Yang and Ang Yi Zhe.

## Citation

If you find our repository useful for your research, please consider citing our paper:

```bibtex
@InProceedings{yang2021scood,
    author    = {Yang, Jingkang and Wang, Haoqi and Feng, Litong and Yan, Xiaopeng and Zheng, Huabin and Zhang, Wayne and Liu, Ziwei},
    title     = {Semantically Coherent Out-of-Distribution Detection},
    booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
    year      = {2021}
}
```
