lawy623/SVS

Code Repo for "Single View Stereo Matching" (CVPR'18 Spotlight)

This repo includes the source code of the paper "Single View Stereo Matching" (CVPR'18 Spotlight) by Yue Luo*, Jimmy Ren*, Mude Lin, Jiahao Pang, Wenxiu Sun, Hongsheng Li, and Liang Lin.

Contact: Yue Luo (lawy623@gmail.com)

Prerequisites

The code is tested on 64-bit Linux (Ubuntu 14.04 LTS). You should also install Matlab (we have tested R2015a). We have tested our code on a GTX Titan X with CUDA 8.0 + cuDNN v5. Please install all these prerequisites before running our code.

Installation

  1. Get the code.

    git clone https://github.com/lawy623/SVS.git
    cd SVS
  2. Build the code. Please follow the Caffe instructions to install all necessary packages and build it.

    cd caffe/
    # Modify Makefile.config according to your Caffe installation. Remember to enable CUDA and cuDNN.
    make -j8
    make matcaffe
  3. Prepare data. We write all data and labels into .mat files.

  • Please go to the directory data/ and run get_data.sh to download the Kitti Stereo 2015 and Kitti Raw datasets.
  • To create the .mat files, please go to the directory data/ and run the matlab scripts prepareTrain.m and prepareTest.m respectively. It will take some time to prepare the data.
  • If you only want to test our models, you can simply download the Eigen test file at [GoogleDrive|BaiduPan]. Put this test .mat file in /data/testing/.
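After these steps, the data/ directory should look roughly as follows (a sketch; the exact .mat file names are produced by the preparation scripts, and the training/ subfolder is an assumption about where prepareTrain.m writes its output):

```
data/
├── get_data.sh
├── prepareTrain.m
├── prepareTest.m
├── training/   <- .mat files written by prepareTrain.m (assumed location)
└── testing/    <- .mat files from prepareTest.m, or the downloaded Eigen test .mat file
```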

Training

View Synthesis Network

  • As described in our paper, we develop our View Synthesis Network based on the Deep3D method. Directly training the final model described in our paper from VGG16 initialization easily sinks into a local optimum. We therefore first keep the BatchNorm layers and train the model with VGG16 initialization. Go to training/ and run train_viewSyn.m. You can also run the matlab scripts from the terminal in the directory training/ with the following commands. By default, matlab is installed under /usr/local/MATLAB/R2015a; if your matlab is installed elsewhere, please modify train_ViewSyn.sh before running the scripts from the terminal. Download VGG16 at [GoogleDrive|BaiduPan] and put it under training/prototxt/viewSynthesis_BN/preModel/ before finetuning. We train this BN model for roughly 30k iterations.
    ## To run the training matlab scripts from terminal
    sh prototxt/viewSynthesis/train_ViewSyn.sh  # To train the view synthesis network
  • We further remove the BatchNorm layers and obtain better performance. Rename the trained BN model mentioned above (in training/prototxt/viewSynthesis_BN/caffemodel) as viewSyn_BN.caffemodel, or directly download ours at [GoogleDrive|BaiduPan] and place it in the same location. Change line 11 of train_viewSyn.m to 'model = param.model(2);' and run train_viewSyn.m again.
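The two-stage switch boils down to a one-line change in train_viewSyn.m. A sketch of that line: only 'model = param.model(2);' is stated in this README; the stage-1 index is our assumption.

```
% train_viewSyn.m, line 11 (sketch):
% model = param.model(1);  % stage 1: BatchNorm model, VGG16 init (assumed index)
model = param.model(2);    % stage 2: BatchNorm removed, finetune from viewSyn_BN.caffemodel
```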

Stereo Matching Network

  • We do not provide training code for the stereo matching network. We follow CRL and use their trained model. Relevant model settings can be found in training/prototxt/stereo/.

Single View Stereo Matching - End-to-end finetune.

  • To finetune our SVS model, please first download the pretrained models for the two sub-networks. Download the View Synthesis Network at [GoogleDrive|BaiduPan] and put it under training/prototxt/viewSynthesis/caffemodel/. For the Stereo Matching Network, you can download the model trained on the FlyingThings synthetic dataset at [GoogleDrive|BaiduPan], and a model further finetuned on Kitti Stereo 2015 at [GoogleDrive|BaiduPan]. Put the downloaded models under training/prototxt/stereo/caffemodel/.
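Once downloaded, the pretrained models should sit in the following layout (paths taken from the steps above; the exact model file names are placeholders):

```
training/prototxt/
├── viewSynthesis/caffemodel/   <- View Synthesis Network model
└── stereo/caffemodel/          <- Stereo Matching Network models (FlyingThings / Kitti-finetuned)
```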
  • Go to training/ and run train_svs.m. You can also run the matlab scripts from the terminal in the directory training/ with the following commands.
    ## To run the training matlab scripts from terminal
    sh prototxt/svs/train_svs.sh  # To train the svs network

Testing

  • Download the Eigen test file at [GoogleDrive|BaiduPan] and put this test .mat file in /data/testing/, or follow the data preparation step mentioned above. Download the SVS model at [GoogleDrive|BaiduPan] and put it under training/prototxt/svs/caffemodel/.
  • Go to the directory testing/. Run test_svs.m to get the result before finetuning; please make sure you have downloaded the trained View Synthesis Network and Stereo Matching Network. Run test_svs_end2end.m to get our state-of-the-art result on monocular depth estimation.
  • If you want to get some visual results, change line 4 of test_svs.m or test_svs_end2end.m to 'visual = 1;'.
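The visualization switch is a single line at the top of either test script. A sketch of the change; the meaning of 0 as "metrics only" is our assumption:

```
% test_svs.m / test_svs_end2end.m, line 4:
visual = 1;   % 1 = also show/save depth visualizations; 0 = metrics only (assumed default)
```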

Results

  • Some of our qualitative results are shown here.

  • We provide the quantitative results on the KITTI Eigen test set (697 images). Download them here.

Citation

Please cite our paper if you find it useful for your work:

@InProceedings{Luo2018SVS,
  title     = {Single View Stereo Matching},
  author    = {Yue Luo and Jimmy Ren and Mude Lin and Jiahao Pang and Wenxiu Sun and Hongsheng Li and Liang Lin},
  booktitle = {CVPR},
  year      = {2018},
}
