[ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance


Marvin Klingner, Jan-Aike Termöhlen, Jonas Mikolajczyk, and Tim Fingscheidt – ECCV 2020

Link to paper

ECCV Presentation

SGDepth video presentation ECCV

Idea Behind the Method

Self-supervised monocular depth estimation usually relies on the assumption of a static world during training, which is violated by dynamic objects. In our paper we introduce a multi-task learning framework that semantically guides the self-supervised depth estimation to handle such objects.
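
To make the idea concrete, here is a minimal, hypothetical sketch (not the training code of this repository; tensor shapes, class IDs, and names are illustrative assumptions) of how a segmentation prediction can exclude potentially dynamic pixels from the photometric loss:

```python
import torch

# Illustrative sketch of semantically guided loss masking.
# `photometric_error` is a per-pixel reprojection error (B, 1, H, W);
# `seg_logits` are segmentation logits (B, C, H, W); both are assumed inputs.
DYNAMIC_CLASSES = [11, 12, 13]  # e.g. person, rider, car (Cityscapes train IDs)

def masked_photometric_loss(photometric_error, seg_logits):
    pred_class = seg_logits.argmax(dim=1, keepdim=True)  # (B, 1, H, W)
    dynamic = torch.zeros_like(pred_class, dtype=torch.bool)
    for c in DYNAMIC_CLASSES:
        dynamic |= pred_class == c
    static_mask = (~dynamic).float()
    # Average the error over static pixels only, so that moving objects
    # do not corrupt the self-supervised depth signal.
    return (photometric_error * static_mask).sum() / static_mask.sum().clamp(min=1)
```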

Citation

If you find our work useful or interesting, please consider citing our paper:

```
@inproceedings{klingner2020selfsupervised,
  title     = {{Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance}},
  author    = {Marvin Klingner and
               Jan-Aike Term\"{o}hlen and
               Jonas Mikolajczyk and
               Tim Fingscheidt},
  booktitle = {{European Conference on Computer Vision ({ECCV})}},
  year      = {2020}
}
```

Improved Depth Estimation Results

As a consequence of the multi-task training, dynamic objects are more clearly shaped and small objects such as traffic signs or traffic lights are better recognised in comparison to previous methods.

Our models

| Model | Resolution | Abs Rel | Sq Rel | RMSE | RMSE log | δ < 1.25 | δ < 1.25^2 | δ < 1.25^3 |
|-------|------------|---------|--------|------|----------|----------|------------|------------|
| SGDepth only depth | 640x192 | 0.117 | 0.907 | 4.844 | 0.196 | 0.875 | 0.958 | 0.980 |
| SGDepth full | 640x192 | 0.113 | 0.835 | 4.693 | 0.191 | 0.879 | 0.961 | 0.981 |
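
These are the standard depth evaluation metrics of Eigen et al.; for reference, a short NumPy sketch of how they are commonly computed (assuming `gt` and `pred` are 1-D arrays of valid, matched depth values):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular depth metrics over valid pixels (1-D arrays)."""
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel":  np.mean(np.abs(gt - pred) / gt),
        "sq_rel":   np.mean((gt - pred) ** 2 / gt),
        "rmse":     np.sqrt(np.mean((gt - pred) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        "delta1":   np.mean(thresh < 1.25),
        "delta2":   np.mean(thresh < 1.25 ** 2),
        "delta3":   np.mean(thresh < 1.25 ** 3),
    }
```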

Inference Preview:

Prerequisites and Requirements

We recommend using Anaconda; an `environment.yml` file is provided. We use PyTorch 1.1 with CUDA 10.0; a `requirements.txt` file also exists. Older and newer versions of the mentioned packages may also work. However, from PyTorch 1.3 on, the default behaviour of some functions (e.g. `grid_sample()`) changed, which needs to be considered when training models with the newest PyTorch releases (see the sketch after the setup steps below). To start working with our code, do the following steps:

  1. In your project dir, export the environment variable for the checkpoints: `export IFN_DIR_CHECKPOINT=/path/to/folder/Checkpoints`
  2. Download the Cityscapes dataset: https://www.cityscapes-dataset.com/ and put it in a folder "Dataset".
  3. Download the KITTI dataset: http://www.cvlibs.net/datasets/kitti/ and place it in the same dataset folder. To ensure that you have the same folder structure as we have, you can directly use the script `dataloader\data_preprocessing\download_kitti.py`.
  4. If you want to evaluate on the KITTI 2015 stereo dataset, also download it from http://www.cvlibs.net/datasets/kitti/ and apply `dataloader\data_preprocessing\kitti_2015_generate_depth.py` to generate the depth maps.
  5. Prepare the dataset folder:
    • Export an environment variable pointing to the root directory of all datasets: `export IFN_DIR_DATASET=/path/to/folder/Dataset`
    • Place the json files in your cityscapes folder. Make sure the folder is spelled exactly as given here: "cityscapes".
    • Place the json files in your kitti folder. Make sure the folder is spelled exactly as given here: "kitti".
    • Place the json files in your kitti_zhou_split folder. Make sure the folder is spelled exactly as given here: "kitti_zhou_split".
    • Place the json files in your kitti_kitti_split folder. Make sure the folder is spelled exactly as given here: "kitti_kitti_split".
    • Place the json files in your kitti_2015 folder containing the KITTI 2015 stereo dataset. Make sure the folder is spelled exactly as given here: "kitti_2015".
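
Concerning the `grid_sample()` change mentioned above: from PyTorch 1.3 on, `align_corners` defaults to `False`, whereas PyTorch 1.1 behaved as if `align_corners=True`. A minimal sketch (illustrative tensors, not code from this repository) of pinning the old behaviour on newer releases:

```python
import torch
import torch.nn.functional as F

img = torch.rand(1, 3, 192, 640)           # (B, C, H, W) image or feature map
grid = torch.rand(1, 192, 640, 2) * 2 - 1  # sampling grid in [-1, 1], (B, H, W, 2)

# PyTorch <= 1.2 implicitly used align_corners=True; from 1.3 on the
# default is False, so pass it explicitly for reproducible warping.
warped = F.grid_sample(img, grid, align_corners=True)
```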

For further information please also refer to our dataloader: Dataloader Repository

Inference

The inference script works independently; it just imports the model and the arguments. It runs inference on all images in a given directory and writes the outputs to the defined directory.

```bash
python3 inference.py \
    --model-path sgdepth_eccv_test/zhou_full/epoch_20/model.pth \
    --inference-resize-height 192 \
    --inference-resize-width 640 \
    --image-path /path/to/input/dir \
    --output-path /path/to/output/dir
```

You can also define the output format with `--output-format .png` or `.jpg`.

Depth Evaluation

For evaluation of the predicted depth, use `eval_depth.py`. Specify which model to use with the `--model-name` and `--model-load` flags. The path is relative to the exported checkpoint directory. An example is shown below:

```bash
python3 eval_depth.py \
    --sys-best-effort-determinism \
    --model-name "eval_kitti_depth" \
    --model-load sgdepth_eccv_test/zhou_full/checkpoints/epoch_20 \
    --depth-validation-loaders "kitti_zhou_test"
```

Additionally, an example script is shown in `eval_depth.sh`.

Segmentation Evaluation

For the evaluation of the segmentation results on Cityscapes, use `eval_segmentation.py`:

```bash
python3 eval_segmentation.py \
    --sys-best-effort-determinism \
    --model-name "eval_kitti_seg" \
    --model-load sgdepth_eccv_test/zhou_full/checkpoints/epoch_20 \
    --segmentation-validation-loaders "cityscapes_validation" \
    --segmentation-validation-resize-width 1024 \
    --segmentation-validation-resize-height 512 \
    --eval-num-images 1
```

Additionally, an example script is shown in `eval_segmentation.sh`.

Training

To train the model, use `train.py`:

```bash
python3 train.py \
    --model-name zhou_full \
    --depth-training-loaders "kitti_zhou_train" \
    --train-batches-per-epoch 7293 \
    --masking-enable \
    --masking-from-epoch 15 \
    --masking-linear-increase
```
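
For intuition only: assuming `--masking-from-epoch` and `--masking-linear-increase` ramp the influence of the semantic masking linearly from the given epoch onward (a hypothetical reading of the flags; consult the training code for the exact schedule), the schedule would look roughly like:

```python
def masking_ramp(epoch, from_epoch=15, total_epochs=20):
    """Hypothetical linear schedule: no masking before `from_epoch`,
    then a linear increase up to full strength at `total_epochs`."""
    if epoch < from_epoch:
        return 0.0
    return min(1.0, (epoch - from_epoch + 1) / (total_epochs - from_epoch))
```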

If you have any questions feel free to contact us!

License

This code is licensed under the MIT License. Feel free to use it within the boundaries of this license.
