SNE-RoadSeg for Freespace Detection in PyTorch, ECCV 2020


Introduction

This is the official PyTorch implementation of SNE-RoadSeg: Incorporating Surface Normal Information into Semantic Segmentation for Accurate Freespace Detection, accepted by ECCV 2020. This is our project page.

In this repo, we provide the training and testing setup for the KITTI Road Dataset. We test our code in Python 3.7, CUDA 10.0, cuDNN 7 and PyTorch 1.1. We provide a Dockerfile to build the docker image we use.
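If you use the Dockerfile, a typical workflow is to build the image from the repository root with `docker build -t sne-roadseg .` and then start a container with `docker run --gpus all -it sne-roadseg`; the image tag here is arbitrary, and `--gpus all` assumes a Docker version with the NVIDIA container toolkit (older setups use the `nvidia-docker` wrapper instead).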

Setup

Please set up the KITTI Road Dataset and pretrained weights according to the following folder structure:

```
SNE-RoadSeg
 |-- checkpoints
 |  |-- kitti
 |  |  |-- kitti_net_RoadSeg.pth
 |-- data
 |-- datasets
 |  |-- kitti
 |  |  |-- training
 |  |  |  |-- calib
 |  |  |  |-- depth_u16
 |  |  |  |-- gt_image_2
 |  |  |  |-- image_2
 |  |  |-- validation
 |  |  |  |-- calib
 |  |  |  |-- depth_u16
 |  |  |  |-- gt_image_2
 |  |  |  |-- image_2
 |  |  |-- testing
 |  |  |  |-- calib
 |  |  |  |-- depth_u16
 |  |  |  |-- image_2
 |-- examples
 ...
```

image_2, gt_image_2 and calib can be downloaded from the KITTI Road Dataset. We implement depth_u16 based on the LiDAR data provided in the KITTI Road Dataset, and it can be downloaded from here. Note that depth_u16 has the uint16 data format, and the real depth in meters can be obtained by double(depth_u16)/1000. Moreover, the pretrained weights kitti_net_RoadSeg.pth for our SNE-RoadSeg-152 can be downloaded from here.
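For reference, here is a minimal sketch of that conversion in Python; the file name is a hypothetical placeholder, and any image in depth_u16 works the same way:

```python
import cv2  # pip install opencv-python
import numpy as np

# Hypothetical example file; substitute any image from depth_u16.
path = 'datasets/kitti/training/depth_u16/um_000000.png'

# IMREAD_ANYDEPTH keeps the 16-bit values instead of truncating to 8 bits.
depth_u16 = cv2.imread(path, cv2.IMREAD_ANYDEPTH)
assert depth_u16 is not None and depth_u16.dtype == np.uint16

# double(depth_u16)/1000, as described above: depth in meters.
depth_m = depth_u16.astype(np.float64) / 1000.0
print('depth range [m]:', depth_m.min(), depth_m.max())
```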

Usage

Run an example

We provide one example in examples. To run it, you only need to set up the checkpoints folder as mentioned above. Then, run the following script:

```
bash ./scripts/run_example.sh
```

and you will see normal.png, pred.png and prob_map.png in examples. normal.png is the normal estimation by our SNE; pred.png is the freespace prediction by our SNE-RoadSeg; and prob_map.png is the probability map predicted by our SNE-RoadSeg.
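If you want to turn prob_map.png into a binary freespace mask yourself, a minimal sketch follows; it assumes the probability map is saved as an 8-bit grayscale image (255 = most confident road), which is worth verifying against your own output:

```python
import cv2
import numpy as np

# Assumption: prob_map.png is 8-bit grayscale, with 255 = most confident road.
prob = cv2.imread('examples/prob_map.png', cv2.IMREAD_GRAYSCALE) / 255.0

# Mark pixels above 0.5 freespace probability as road.
mask = (prob > 0.5).astype(np.uint8) * 255
cv2.imwrite('examples/freespace_mask.png', mask)
```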

Testing for KITTI submission

For KITTI submission, you need to set up the checkpoints and the datasets/kitti/testing folder as mentioned above. Then, run the following script:

```
bash ./scripts/test.sh
```

and you will get the prediction results in testresults. After that, you can follow the submission instructions to transform the prediction results into the BEV perspective for submission.

If everything works fine, you will get a MaxF score of 96.74 for URBAN. Note that these are our re-implemented weights, and the score is very close to the one reported in the paper (a MaxF score of 96.75 for URBAN).
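For context, MaxF is the maximum F1-measure over all confidence thresholds on the precision-recall curve, as used by the KITTI road benchmark. A minimal sketch of the metric (this is not the repo's or the benchmark's evaluation code, which additionally evaluates in the BEV perspective):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def max_f_score(gt_mask, prob_map):
    """Maximum F1-measure over all thresholds.

    gt_mask:  binary ground-truth freespace labels, shape (H, W)
    prob_map: predicted freespace probabilities in [0, 1], shape (H, W)
    """
    precision, recall, _ = precision_recall_curve(gt_mask.ravel(),
                                                  prob_map.ravel())
    f = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return f.max()
```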

Training on the KITTI dataset

For training, you need to set up the datasets/kitti folder as mentioned above. You can split the original training set into a new training set and a validation set as you like. Then, run the following script:

```
bash ./scripts/train.sh
```

and the weights will be saved in checkpoints, while the tensorboard record containing the loss curves, as well as the performance on the validation set, will be saved in runs. Note that use-sne in train.sh controls whether our SNE model is used, and the default is True. If you delete it, our RoadSeg will take depth images as input, and you also need to delete use-sne in test.sh to avoid errors when testing.
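To monitor the loss curves and the validation performance while training, you can run `tensorboard --logdir runs` and open the printed URL in a browser.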

Citation

If you use this code for your research, please cite our paper.

```
@inproceedings{fan2020sne,
  title        = {{SNE-RoadSeg}: Incorporating surface normal information into semantic segmentation for accurate freespace detection},
  author       = {Fan, Rui and Wang, Hengli and Cai, Peide and Liu, Ming},
  booktitle    = {European Conference on Computer Vision},
  pages        = {340--356},
  year         = {2020},
  organization = {Springer}
}
```

Acknowledgement

Our code is inspired by pytorch-CycleGAN-and-pix2pix, and we thank Jun-Yan Zhu for their great work.

