PSANet: Point-wise Spatial Attention Network for Scene Parsing, ECCV2018.


by Hengshuang Zhao*, Yi Zhang*, Shu Liu, Jianping Shi, Chen Change Loy, Dahua Lin, Jiaya Jia. Details are available on the project page.

Introduction

This repository is built for PSANet and contains the source code for the PSA module together with the related evaluation code. For installation, please merge the related layers into the PSPNet repository and follow the description there (tested with CUDA 7.0/7.5 + cuDNN v4).
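For intuition, below is a minimal PyTorch sketch of the "collect" idea behind the PSA module: every position predicts an attention map over all positions and aggregates features as a weighted sum. This is a simplification, not the released implementation: the actual module predicts over-complete (2H-1)x(2W-1) maps per position, adds a symmetric "distribute" branch, and ships as custom Caffe/CUDA layers; the channel reduction and softmax normalization here are assumptions made to keep the sketch small.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PSACollect(nn.Module):
        """Simplified 'collect' branch: each position gathers features from
        all H*W positions using a predicted, position-specific attention map."""

        def __init__(self, in_channels, feat_hw):
            super().__init__()
            h, w = feat_hw
            self.hw = h * w
            # Reduce channels, then predict one attention logit per source position.
            self.reduce = nn.Conv2d(in_channels, in_channels // 4, kernel_size=1)
            self.attn = nn.Conv2d(in_channels // 4, self.hw, kernel_size=1)

        def forward(self, x):
            n, c, h, w = x.shape
            assert h * w == self.hw, "feature size must match feat_hw"
            # logits[:, j, i]: how much position i collects from source position j.
            logits = self.attn(self.reduce(x)).view(n, self.hw, self.hw)
            a = F.softmax(logits, dim=1)       # normalize over source positions (assumption)
            feats = x.view(n, c, self.hw)      # (N, C, H*W)
            out = torch.bmm(feats, a)          # out[:, :, i] = sum_j a[:, j, i] * x_j
            return out.view(n, c, h, w)

    if __name__ == "__main__":
        psa = PSACollect(in_channels=512, feat_hw=(24, 24))
        y = psa(torch.randn(2, 512, 24, 24))
        print(y.shape)  # torch.Size([2, 512, 24, 24])

The point of the batched matrix multiply is that the attention is point-wise: there are H*W distinct maps of size H*W, one per position, rather than a single global weighting shared by all positions.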

PyTorch Version

A highly optimized PyTorch codebase for semantic segmentation is available in the repo semseg, including full training and testing code for PSPNet and PSANet.

Usage

  1. Clone the repository recursively:

    git clone --recursive https://github.com/hszhao/PSANet.git
  2. Merge the Caffe layers into the PSPNet repository:

    Point-wise spatial attention: pointwise_spatial_attention_layer.hpp/cpp/cu and caffe.proto.

  3. Build Caffe and matcaffe:

    cd $PSANET_ROOT/PSPNet
    cp Makefile.config.example Makefile.config
    vim Makefile.config
    make -j8 && make matcaffe
    cd ..
  4. Evaluation:

    • Evaluation code is in folder 'evaluation'.

    • Download trained models and put them in the related dataset folder under 'evaluation/model'; refer to 'README.md'.

    • Modify the related paths in 'eval_all.m':

      Mainly the variables 'data_root' and 'eval_list'; your image list for evaluation should be similar to those in folder 'evaluation/samplelist' if you use this evaluation code structure (a rough list-generation sketch follows this section).

    cd evaluation
    vim eval_all.m
    • Run the evaluation scripts:
    ./run.sh
  5. Results:

    Predictions will be saved in folder 'evaluation/mc_result' and the expected scores are listed below (a checksum sketch for verifying downloaded models follows this section):

    (mIoU/pAcc. stands for mean IoU and pixel accuracy; 'ss' and 'ms' denote single-scale and multi-scale testing.)

    ADE20K:

    network   | training data | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum
    ----------|---------------|--------------|-----------------|-----------------|-------
    PSANet50  | train         | val          | 41.92/80.17     | 42.97/80.92     | a8e884
    PSANet101 | train         | val          | 42.75/80.71     | 43.77/81.51     | ab5e56

    VOC2012:

    network   | training data          | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum
    ----------|------------------------|--------------|-----------------|-----------------|-------
    PSANet50  | train_aug              | val          | 77.24/94.88     | 78.14/95.12     | d5fc37
    PSANet101 | train_aug              | val          | 78.51/95.18     | 79.77/95.43     | 5d8c0f
    PSANet101 | COCO + train_aug + val | test         | -/-             | 85.7/-          | 3c6a69

    Cityscapes:

    network   | training data         | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum
    ----------|-----------------------|--------------|-----------------|-----------------|-------
    PSANet50  | fine_train            | fine_val     | 76.65/95.99     | 77.79/96.24     | 25c06a
    PSANet101 | fine_train            | fine_val     | 77.94/96.10     | 79.05/96.30     | 3ac1bf
    PSANet101 | fine_train            | fine_test    | -/-             | 78.6/-          | 3ac1bf
    PSANet101 | fine_train + fine_val | fine_test    | -/-             | 80.1/-          | 1dfc91
  6. Demo video:

    • Video processed by PSANet (with PSPNet) on the BDD dataset for drivable-area segmentation: Video.
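On step 4 above: the authoritative list format is whatever 'evaluation/samplelist' contains; purely as an illustration, assuming each line pairs a relative image path with its label path (verify against the sample lists), an evaluation list could be generated like this, where all paths are hypothetical:

    import os

    # Hypothetical dataset layout; adjust and compare the output against
    # the files in 'evaluation/samplelist' before using it with eval_all.m.
    data_root = "/path/to/ADEChallengeData2016"
    image_dir = "images/validation"
    label_dir = "annotations/validation"

    with open("ade20k_val.txt", "w") as f:
        for name in sorted(os.listdir(os.path.join(data_root, image_dir))):
            if not name.endswith(".jpg"):
                continue
            label_name = name.replace(".jpg", ".png")
            f.write(f"{os.path.join(image_dir, name)} {os.path.join(label_dir, label_name)}\n")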
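On step 5: the md5sum column appears to list the first six hex digits of each model file's MD5. Assuming that, a quick way to check a downloaded model (the filename below is hypothetical) is:

    import hashlib

    def md5_prefix(path, length=6, chunk_size=1 << 20):
        """Return the first `length` hex digits of a file's MD5, reading in chunks."""
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                md5.update(chunk)
        return md5.hexdigest()[:length]

    # Hypothetical filename; compare against the md5sum column, e.g. a8e884.
    print(md5_prefix("evaluation/model/ade20k/psanet50.caffemodel"))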

Citation

If PSANet is useful for your research, please consider citing:

@inproceedings{zhao2018psanet,
  title={{PSANet}: Point-wise Spatial Attention Network for Scene Parsing},
  author={Zhao, Hengshuang and Zhang, Yi and Liu, Shu and Shi, Jianping and Loy, Chen Change and Lin, Dahua and Jia, Jiaya},
  booktitle={ECCV},
  year={2018}
}

Questions

Please contact 'hszhao@cse.cuhk.edu.hk' or 'zy217@ie.cuhk.edu.hk'.
