
[ECCV2022 Oral] Registration based Few-Shot Anomaly Detection

MediaBrain-SJTU/RegAD
This is an official implementation of “Registration based Few-Shot Anomaly Detection” (RegAD) with PyTorch, accepted by ECCV 2022 (Oral).

Paper Link

@inproceedings{huang2022regad,
  title={Registration based Few-Shot Anomaly Detection},
  author={Huang, Chaoqin and Guan, Haoyan and Jiang, Aofan and Zhang, Ya and Spratling, Michael and Wang, Yanfeng},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2022}
}

Abstract: This paper considers few-shot anomaly detection (FSAD), a practical yet under-studied setting for anomaly detection (AD), where only a limited number of normal images are provided for each category at training. So far, existing FSAD studies follow the one-model-per-category learning paradigm used for standard AD, and the inter-category commonality has not been explored. Inspired by how humans detect anomalies, i.e., comparing an image in question to normal images, we here leverage registration, an image alignment task that is inherently generalizable across categories, as the proxy task, to train a category-agnostic anomaly detection model. During testing, the anomalies are identified by comparing the registered features of the test image and its corresponding support (normal) images. As far as we know, this is the first FSAD method that trains a single generalizable model and requires no re-training or parameter fine-tuning for new categories.
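The test-time comparison described in the abstract can be illustrated with a rough NumPy sketch. This is not the RegAD model itself (which registers features with a learned spatial transformer before comparing); it only shows the final step of scoring each spatial location by its distance to the nearest support (normal) feature, with all shapes and names hypothetical:

```python
import numpy as np

def anomaly_map(test_feat, support_feats):
    """Score each spatial location by its distance to the closest
    registered support feature at the same location.

    test_feat: (C, H, W) features of the registered test image.
    support_feats: (K, C, H, W) features of the K registered support images.
    Returns an (H, W) map; higher values indicate likely anomalies.
    """
    # Per-location L2 distance to each support image: shape (K, H, W).
    dists = np.linalg.norm(support_feats - test_feat[None], axis=1)
    # A location is anomalous only if it is far from *every* normal example,
    # so take the minimum over the K support images.
    return dists.min(axis=0)
```

A location that matches any support image scores near zero, while a defect that appears in none of the support images keeps a large minimum distance.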

Keywords: Anomaly Detection, Few-Shot Learning, Registration

Get Started

Environment

  • python >= 3.7.11
  • pytorch >= 1.11.0
  • torchvision >= 0.12.0
  • numpy >= 1.19.5
  • scipy >= 1.7.3
  • scikit-image (skimage) >= 0.19.2
  • matplotlib >= 3.5.2
  • kornia >= 0.6.5
  • tqdm
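A quick way to confirm the environment meets the list above is to compare installed package versions against the minimums. This helper is not part of the repository, just an illustrative check; note that `skimage` is distributed on PyPI as `scikit-image`:

```python
from importlib import metadata

# Minimum versions from the list above; tqdm has no stated minimum.
REQUIRED = {
    "torch": "1.11.0",
    "torchvision": "0.12.0",
    "numpy": "1.19.5",
    "scipy": "1.7.3",
    "scikit-image": "0.19.2",
    "matplotlib": "3.5.2",
    "kornia": "0.6.5",
    "tqdm": "0",
}

def version_tuple(v):
    """Parse 'major.minor.patch' into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def check(requirements=REQUIRED):
    """Return a list of packages that are missing or too old."""
    problems = []
    for pkg, minimum in requirements.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            problems.append(f"{pkg} (not installed)")
            continue
        if version_tuple(installed) < version_tuple(minimum):
            problems.append(f"{pkg} {installed} < {minimum}")
    return problems
```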

Files Preparation

  1. Download the MVTec dataset here.
  2. Download the support dataset for few-shot anomaly detection on Google Drive or Baidu Disk (i9rx) and unzip it:
    tar -xvf support_set.tar
    If you have trouble downloading the full support set, the capsule and grid categories can optionally be downloaded separately on Baidu Disk (pll9) and Baidu Disk (ns0n).
    We hope followers will use these support datasets to make fair comparisons between different methods.
  3. Download the pre-trained models on Google Drive or Baidu Disk (4qyo) and unzip the checkpoint files:
    tar -xvf save_checkpoints.tar

After the preparation work, the whole project should have the following structure:

./RegAD
├── README.md
├── train.py              # training code
├── test.py               # testing code
├── MVTec                 # MVTec dataset files
│   ├── bottle
│   ├── cable
│   ├── ...
│   └── zipper
├── support_set           # MVTec support dataset files
│   ├── 2
│   ├── 4
│   └── 8
├── models                # models and backbones
│   ├── stn.py
│   └── siamese.py
├── losses                # losses
│   └── norm_loss.py
├── datasets              # dataset
│   └── mvtec.py
├── save_checkpoints      # model checkpoint files
└── utils                 # utils
    ├── utils.py
    └── funcs.py

Quick Start

python test.py --obj $target-object --shot $few-shot-number --stn_mode rotation_scale

For example, to run on the category bottle with k=2:

python test.py --obj bottle --shot 2 --stn_mode rotation_scale

Training

python train.py --obj $target-object --shot $few-shot-number --data_type mvtec --data_path ./MVTec/ --epochs 50 --batch_size 32 --lr 0.0001 --momentum 0.9 --inferences 10 --stn_mode rotation_scale

For example, to train a RegAD model on the MVTec category bottle with k=2, simply run:

python train.py --obj bottle --shot 2 --data_type mvtec --data_path ./MVTec/ --epochs 50 --batch_size 32 --lr 0.0001 --momentum 0.9 --inferences 10 --stn_mode rotation_scale

Then you can run the evaluation using:

python test.py --obj bottle --shot 2 --stn_mode rotation_scale

Results

Results of few-shot anomaly detection and localization with k=2:

| AUC (%), k=2 | Detection (RegAD) | Detection (Implementation) | Localization (RegAD) | Localization (Implementation) |
|---|---|---|---|---|
| bottle | 99.4 | 99.7 | 98.0 | 98.6 |
| cable | 65.1 | 69.8 | 91.7 | 94.2 |
| capsule | 67.5 | 68.6 | 97.3 | 97.6 |
| carpet | 96.5 | 96.7 | 98.9 | 98.9 |
| grid | 84.0 | 79.1 | 77.4 | 77.5 |
| hazelnut | 96.0 | 96.3 | 98.1 | 98.2 |
| leather | 99.4 | 100 | 98.0 | 99.2 |
| metal_nut | 91.4 | 94.2 | 96.9 | 98.0 |
| pill | 81.3 | 66.1 | 93.6 | 97.0 |
| screw | 52.5 | 53.9 | 94.4 | 94.1 |
| tile | 94.3 | 98.9 | 94.3 | 95.1 |
| toothbrush | 86.6 | 86.8 | 98.2 | 98.2 |
| transistor | 86.0 | 82.2 | 93.4 | 93.3 |
| wood | 99.2 | 99.8 | 93.5 | 96.5 |
| zipper | 86.3 | 90.9 | 95.1 | 98.3 |
| average | 85.7 | 85.5 | 94.6 | 95.6 |

Results of few-shot anomaly detection and localization with k=4:

| AUC (%), k=4 | Detection (RegAD) | Detection (Implementation) | Localization (RegAD) | Localization (Implementation) |
|---|---|---|---|---|
| bottle | 99.4 | 99.3 | 98.4 | 98.5 |
| cable | 76.1 | 82.9 | 92.7 | 95.5 |
| capsule | 72.4 | 77.3 | 97.6 | 98.3 |
| carpet | 97.9 | 97.9 | 98.9 | 98.9 |
| grid | 91.2 | 87 | 85.7 | 85.7 |
| hazelnut | 95.8 | 95.9 | 98.0 | 98.4 |
| leather | 100 | 99.9 | 99.1 | 99 |
| metal_nut | 94.6 | 94.3 | 97.8 | 96.5 |
| pill | 80.8 | 74.0 | 97.4 | 97.4 |
| screw | 56.6 | 59.3 | 95.0 | 96.0 |
| tile | 95.5 | 98.2 | 94.9 | 92.6 |
| toothbrush | 90.9 | 91.1 | 98.5 | 98.5 |
| transistor | 85.2 | 85.5 | 93.8 | 93.5 |
| wood | 98.6 | 98.9 | 94.7 | 96.3 |
| zipper | 88.5 | 95.8 | 94.0 | 98.6 |
| average | 88.2 | 89.2 | 95.8 | 96.2 |

Results of few-shot anomaly detection and localization with k=8:

| AUC (%), k=8 | Detection (RegAD) | Detection (Implementation) | Localization (RegAD) | Localization (Implementation) |
|---|---|---|---|---|
| bottle | 99.8 | 99.8 | 97.5 | 98.5 |
| cable | 80.6 | 81.5 | 94.9 | 95.8 |
| capsule | 76.3 | 78.4 | 98.2 | 98.4 |
| carpet | 98.5 | 98.6 | 98.9 | 98.9 |
| grid | 91.5 | 91.5 | 88.7 | 88.7 |
| hazelnut | 96.5 | 97.3 | 98.5 | 98.5 |
| leather | 100 | 100 | 98.9 | 99.3 |
| metal_nut | 98.3 | 98.6 | 96.9 | 98.3 |
| pill | 80.6 | 77.8 | 97.8 | 97.7 |
| screw | 63.4 | 65.8 | 97.1 | 97.3 |
| tile | 97.4 | 99.6 | 95.2 | 96.1 |
| toothbrush | 98.5 | 96.6 | 98.7 | 99.0 |
| transistor | 93.4 | 90.3 | 96.8 | 95.9 |
| wood | 99.4 | 99.5 | 94.6 | 96.5 |
| zipper | 94.0 | 93.4 | 97.4 | 97.4 |
| average | 91.2 | 91.2 | 96.8 | 97.1 |
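The AUC (%) reported in these tables is the area under the ROC curve, which for image-level detection equals the probability that a randomly chosen anomalous image receives a higher anomaly score than a randomly chosen normal one. A minimal NumPy sketch of that metric (illustrative, not the repository's evaluation code), computed via the Mann-Whitney U statistic:

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 for anomalous, 0 for normal.
    scores: higher values mean "more anomalous".
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count anomalous/normal pairs where the anomalous image outranks
    # the normal one, counting ties as half a win.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Multiplying the result by 100 gives the percentage form used in the tables.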

Visualization

Acknowledgement

We borrow some code from SimSiam, STN, and PaDiM.

Contact

If you have any problems with this code, please feel free to contact huangchaoqin@sjtu.edu.cn.
