[NeurIPS2021] Code Release of K-Net: Towards Unified Image Segmentation


ZwwWayne/K-Net


Introduction

This is an official release of the paper K-Net: Towards Unified Image Segmentation. K-Net will also be integrated into future releases of MMDetection and MMSegmentation.

K-Net: Towards Unified Image Segmentation,
Wenwei Zhang, Jiangmiao Pang, Kai Chen, Chen Change Loy
In: Proc. Advances in Neural Information Processing Systems (NeurIPS), 2021
[arXiv] [project page] [BibTeX]
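The core idea of K-Net is that a set of learnable kernels predicts masks directly: each kernel is dotted with every pixel feature to produce one mask logit map, so semantic, instance, and panoptic segmentation can share a single formulation. A toy numpy sketch of this mask-prediction step (the helper name and shapes here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def kernel_masks(feats, kernels):
    """Toy illustration of kernel-based mask prediction:
    feats:   (C, H, W) per-pixel feature map
    kernels: (N, C)    one learnable kernel per mask
    returns: (N, H, W) mask logit maps."""
    C, H, W = feats.shape
    logits = kernels @ feats.reshape(C, H * W)  # dot each kernel with each pixel
    return logits.reshape(-1, H, W)

# Each pixel is then assigned to the kernel with the highest logit.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))    # C=8 channels, 4x4 feature map
kernels = rng.standard_normal((3, 8))     # N=3 kernels
masks = kernel_masks(feats, kernels)      # (3, 4, 4)
assignment = masks.argmax(axis=0)         # (4, 4) hard segmentation
```

In the actual model the kernels are iteratively refined from the mask predictions; this sketch only shows the shared kernel-times-feature mask readout.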

Results

The results of K-Net and their corresponding configs on each segmentation task are shown below. We have released the full model zoo for panoptic segmentation. The complete model checkpoints and logs for instance and semantic segmentation will be released soon.

Semantic Segmentation on ADE20K

| Backbone | Method | Crop Size | Lr Schd | mIoU | Config | Download |
| -------- | ------ | --------- | ------- | ---- | ------ | -------- |
| R-50 | K-Net + FCN | 512x512 | 80K | 43.3 | config | model \| log |
| R-50 | K-Net + PSPNet | 512x512 | 80K | 43.9 | config | model \| log |
| R-50 | K-Net + DeepLabv3 | 512x512 | 80K | 44.6 | config | model \| log |
| R-50 | K-Net + UPerNet | 512x512 | 80K | 43.6 | config | model \| log |
| Swin-T | K-Net + UPerNet | 512x512 | 80K | 45.4 | config | model \| log |
| Swin-L | K-Net + UPerNet | 512x512 | 80K | 52.0 | config | model \| log |
| Swin-L | K-Net + UPerNet | 640x640 | 80K | 52.7 | config | model \| log |

Instance Segmentation on COCO

| Backbone | Method | Lr Schd | Mask mAP | Config | Download |
| -------- | ------ | ------- | -------- | ------ | -------- |
| R-50 | K-Net | 1x | 34.0 | config | model \| log |
| R-50 | K-Net | ms-3x | 37.8 | config | model \| log |
| R-101 | K-Net | ms-3x | 39.2 | config | model \| log |
| R-101-DCN | K-Net | ms-3x | 40.5 | config | model \| log |

Panoptic Segmentation on COCO

| Backbone | Method | Lr Schd | PQ | Config | Download |
| -------- | ------ | ------- | -- | ------ | -------- |
| R-50 | K-Net | 1x | 44.3 | config | model \| log |
| R-50 | K-Net | ms-3x | 47.1 | config | model \| log |
| R-101 | K-Net | ms-3x | 48.4 | config | model \| log |
| R-101-DCN | K-Net | ms-3x | 49.6 | config | model \| log |
| Swin-L (window size 7) | K-Net | ms-3x | 54.6 | config | model \| log |

The Swin-L model reaches 55.2 PQ on COCO test-dev.

Installation

It requires the following OpenMMLab packages:

  • MIM >= 0.1.5
  • MMCV-full >= v1.3.14
  • MMDetection >= v2.17.0
  • MMSegmentation >= v0.18.0
  • scipy
  • panopticapi
pip install openmim scipy mmdet mmsegmentation
pip install git+https://github.com/cocodataset/panopticapi.git
mim install mmcv-full

License

This project is released under the Apache 2.0 license.

Usage

Data preparation

Prepare data following MMDetection and MMSegmentation. The data structure should look like below:

data/
├── ade
│   ├── ADEChallengeData2016
│   │   ├── annotations
│   │   ├── images
├── coco
│   ├── annotations
│   │   ├── panoptic_{train,val}2017.json
│   │   ├── instance_{train,val}2017.json
│   │   ├── panoptic_{train,val}2017/  # panoptic png annotations
│   │   ├── image_info_test-dev2017.json  # for test-dev submissions
│   ├── train2017
│   ├── val2017
│   ├── test2017

Training and testing

For training and testing, you can directly use mim to train and test the model

# train instance/panoptic segmentation models
sh ./tools/mim_slurm_train.sh $PARTITION mmdet $CONFIG $WORK_DIR
# test instance segmentation models
sh ./tools/mim_slurm_test.sh $PARTITION mmdet $CONFIG $CHECKPOINT --eval segm
# test panoptic segmentation models
sh ./tools/mim_slurm_test.sh $PARTITION mmdet $CONFIG $CHECKPOINT --eval pq
# train semantic segmentation models
sh ./tools/mim_slurm_train.sh $PARTITION mmseg $CONFIG $WORK_DIR
# test semantic segmentation models
sh ./tools/mim_slurm_test.sh $PARTITION mmseg $CONFIG $CHECKPOINT --eval mIoU

For test submission for panoptic segmentation, you can use the command below:

# we should update the category information in the original
# image test-dev pkl file for panoptic segmentation
python -u tools/gen_panoptic_test_info.py
# run test-dev submission
sh ./tools/mim_slurm_test.sh $PARTITION mmdet $CONFIG $CHECKPOINT --format-only --cfg-options data.test.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json data.test.img_prefix=data/coco/test2017 --eval-options jsonfile_prefix=$WORK_DIR
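The step performed by `tools/gen_panoptic_test_info.py` is, in essence, attaching the panoptic category list to the image-only test-dev info file, since test-dev ships without annotations. A minimal sketch of that merge, assuming a hypothetical helper name and COCO-style dicts (the real script may differ in details):

```python
def add_categories(image_info, categories):
    """Return a copy of an image-only COCO info dict with the panoptic
    category list attached. Illustrative sketch, not the actual script."""
    out = dict(image_info)          # shallow copy; leave the input untouched
    out["categories"] = categories  # COCO panoptic evaluation needs this key
    return out

# Minimal example with dummy entries in COCO panoptic style.
info = {"images": [{"id": 1, "file_name": "000000000001.jpg"}]}
cats = [{"id": 1, "name": "person", "isthing": 1}]
panoptic_info = add_categories(info, cats)
```

The merged dict would then be dumped to `panoptic_image_info_test-dev2017.json`, which is the file the submission command above points `data.test.ann_file` at.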

You can also run training and testing without slurm by directly using mim for instance/semantic/panoptic segmentation like below:

PYTHONPATH='.':$PYTHONPATH mim train mmdet $CONFIG $WORK_DIR
PYTHONPATH='.':$PYTHONPATH mim train mmseg $CONFIG $WORK_DIR
  • PARTITION: the slurm partition you are using
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CONFIG: the config files under the directory configs/
  • JOB_NAME: the name of the job, which is required by slurm

Citation

@inproceedings{zhang2021knet,
    title={{K-Net: Towards} Unified Image Segmentation},
    author={Wenwei Zhang and Jiangmiao Pang and Kai Chen and Chen Change Loy},
    year={2021},
    booktitle={NeurIPS},
}
