lizhaoliu-Lec/DENet

This is the official repo for Dynamic Extension Nets for Few-shot Semantic Segmentation (ACM Multimedia 20).

Created by Lizhao Liu, Junyi Cao and Minqian Liu from South China University of Technology.

This repository contains the official PyTorch implementation of our ACM MM 2020 paper Dynamic Extension Nets for Few-shot Semantic Segmentation. In particular, we release the code for training and testing DENet, as well as our re-implementations of other few-shot semantic segmentation methods, under {1, 2}-way and {1, 5}-shot settings.


Pretrained Models

For the convenience of researchers who want to reproduce our results, we have released the pretrained models, tensorboard files and logs recorded during training. They are all uploaded to Google Drive: https://drive.google.com/uc?export=download&id=13SzrlZ6iTJL-pGhEDDrL-SjJuAGVhMid

The config files are in reproduce_configs. The run script is in run.sh.

Note that it may take a while for the tensorboard files to fully load after running the tensorboard command.

The reproduced results are slightly better than those reported in the original paper.

COCO-20i

1-way 1-shot:

| | fold0 | fold1 | fold2 | fold3 | mean |
|---|---|---|---|---|---|
| Paper | 42.90 | 45.78 | 42.16 | 40.22 | 42.77 |
| Reproduced | 43.14 | 45.84 | 42.07 | 40.80 | 42.96 |

1-way 5-shot:

| | fold0 | fold1 | fold2 | fold3 | mean |
|---|---|---|---|---|---|
| Paper | 45.40 | 44.86 | 41.57 | 40.26 | 43.02 |
| Reproduced | 46.46 | 44.95 | 40.28 | 40.99 | 43.17 |
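As a quick sanity check on the tables above, the "mean" column is simply the average of the four fold scores:

```python
# The "mean" columns above are plain averages over the four folds.
paper_1way_1shot = [42.90, 45.78, 42.16, 40.22]       # fold0..fold3 (paper)
reproduced_1way_1shot = [43.14, 45.84, 42.07, 40.80]  # fold0..fold3 (reproduced)

paper_mean = sum(paper_1way_1shot) / 4            # ~42.77, as reported
reproduced_mean = sum(reproduced_1way_1shot) / 4  # ~42.96, as reported
```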

Usage

Environment

  • PyTorch 1.3.1
  • torchvision 0.4.2
  • tensorboardX for recording training information
  • tqdm for displaying the training progress
  • pyyaml for reading yml files
  • A full list of dependencies is in requirements.txt; you can install them all with
     pip install -r requirements.txt
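If you want to verify the pins above before running anything, a small stdlib-only check can help. This helper is illustrative and not part of the repo; the package names and versions are taken from the list above:

```python
# Illustrative environment check -- not part of the DENet repo.
# Verifies that the dependencies listed above are installed and
# (where a version is pinned) match the expected version.
from importlib.metadata import version, PackageNotFoundError

REQUIRED = {
    "torch": "1.3.1",
    "torchvision": "0.4.2",
    "tensorboardX": None,  # no explicit pin in the README
    "tqdm": None,
    "PyYAML": None,
}

def check_environment(required=REQUIRED):
    """Return (package, problem) pairs for missing or mismatched packages."""
    problems = []
    for pkg, want in required.items():
        try:
            have = version(pkg)
        except PackageNotFoundError:
            problems.append((pkg, "not installed"))
            continue
        if want is not None and have != want:
            problems.append((pkg, f"found {have}, expected {want}"))
    return problems
```

An empty return value means the environment matches the pins.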

Dataset Preparation

  1. To save yourself the trouble, you can directly download our preprocessed datasets from Google Drive:

  2. Enter the file dataset/voc_sbd.py and modify ROOT to the path where you want to save the dataset.

     # specify root path for the data
     ROOT = '/path/to/data'

  3. Run the function run_VOCSBDSegmentation5i_from_scratch() to download and process the 4 subfolds of the PASCAL-5i dataset.

  4. Enter the file dataset/coco.py and modify ROOT to the path where you want to save the dataset.

     # specify root path for the data
     ROOT = '/path/to/data'

  5. Run the function run_COCOStuff20i_from_scratch() to download and process the 4 subfolds of the COCO-20i dataset.
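Steps 2-5 above follow the same pattern twice: set ROOT, then call the matching from_scratch function. A sketch of the flow; only the two run_*_from_scratch functions are from the repo, the fold_names helper below is purely illustrative:

```python
# Sketch of the dataset-preparation flow (steps 2-5 above).
# run_VOCSBDSegmentation5i_from_scratch and run_COCOStuff20i_from_scratch
# are the repo's own functions; fold_names is illustrative only.
ROOT = "/path/to/data"  # the value you set in dataset/voc_sbd.py and dataset/coco.py

def fold_names(n_folds=4):
    """Illustrative: both datasets are processed into 4 subfolds."""
    return [f"fold{i}" for i in range(n_folds)]

# Inside the repo, after editing ROOT in each file:
#   from dataset.voc_sbd import run_VOCSBDSegmentation5i_from_scratch
#   run_VOCSBDSegmentation5i_from_scratch()   # downloads + builds PASCAL-5i subfolds
#   from dataset.coco import run_COCOStuff20i_from_scratch
#   run_COCOStuff20i_from_scratch()           # downloads + builds COCO-20i subfolds
```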

Configurations

Please enter the config directory. We provide several configuration templates for training, testing and inference with DENet and the other re-implemented models. You need to create three empty files in the config directory: train_config.yml, test_config.yml and inference_config.yml. Then copy the content of one of the template files into the corresponding yml file. For example, copy the content of train_config_denet.template into train_config.yml to train with DENet. You still need to specify the path to the dataset in the yml files.

Note: The provided templates use PASCAL-5i as the default dataset; change the corresponding arguments to use the COCO-20i dataset.

Training

After the train_config.yml file is appropriately configured, training takes just one line:

python -u train.py

All the records of training information will be automatically saved to runs/${MODEL_NAME}/${ID}.

Testing

After the test_config.yml file is appropriately configured, you can run the following script to test the trained model:

python -u test.py

Likewise, all the records of testing information will be automatically saved to runs/${MODEL_NAME}/${ID}.

Inference

After the inference_config.yml file is appropriately configured, you can do inference with the trained model using the script:

python -u inference.py

The output images will be saved to results/${MODEL_NAME}/${ID}.


Quantitative Results

  • All results below are in mIoU (%).

PASCAL-5i

| 1-way 1-shot | 1-way 5-shot | 2-way 1-shot | 2-way 5-shot |
|---|---|---|---|
| 60.08 | 60.46 | 52.17 | 53.62 |

COCO-20i

| 1-way 1-shot | 1-way 5-shot | 2-way 1-shot | 2-way 5-shot |
|---|---|---|---|
| 42.77 | 43.02 | 38.52 | 40.87 |

Qualitative Results

  • 1 way

  • 2 way

  • failure cases


Citation

If you find our work useful in your research, please consider citing:

@inproceedings{liu2020dynamic,
  title={Dynamic Extension Nets for Few-shot Semantic Segmentation},
  author={Liu, Lizhao and Cao, Junyi and Liu, Minqian and Guo, Yong and Chen, Qi and Tan, Mingkui},
  booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
  year={2020}
}

Acknowledgments

This work was partially supported by the Science and Technology Program of Guangzhou, China under Grant 202007030007, the Key-Area Research and Development Program of Guangdong Province under Grant 2019B010155001, the National Natural Science Foundation of China (NSFC) under Grant 61836003 (key project), Guangdong 2017ZT07X183, and the Fundamental Research Funds for the Central Universities under Grant D2191240.

