DADA: Depth-aware Domain Adaptation in Semantic Segmentation


Updates

  • 02/2020: Using CycleGAN-translated images, the DADA model achieves 43.1% on SYNTHIA-2-Cityscapes.

Paper

DADA: Depth-aware Domain Adaptation in Semantic Segmentation
Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Matthieu Cord, Patrick Pérez
valeo.ai, France
IEEE International Conference on Computer Vision (ICCV), 2019

If you find this code useful for your research, please cite our paper:

@inproceedings{vu2019dada,
  title={DADA: Depth-aware Domain Adaptation in Semantic Segmentation},
  author={Vu, Tuan-Hung and Jain, Himalaya and Bucher, Maxime and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={ICCV},
  year={2019}
}

Abstract

Unsupervised domain adaptation (UDA) is important for applications where large-scale annotation of representative data is challenging. For semantic segmentation in particular, it helps deploy, on real "target domain" data, models that are trained on annotated images from a different "source domain", notably a virtual environment. To this end, most previous works consider semantic segmentation as the only mode of supervision for source-domain data, while ignoring other, possibly available, information such as depth. In this work, we aim to exploit such privileged information as fully as possible while training the UDA model. We propose a unified depth-aware UDA framework that leverages the knowledge of dense depth in the source domain in several complementary ways. As a result, the performance of the trained semantic segmentation model on the target domain is boosted. Our approach achieves state-of-the-art performance on several challenging synthetic-2-real benchmarks.

Preparation

Pre-requisites

  • Python 3.7
  • Pytorch >= 1.2.0
  • CUDA 10.0 or higher
  • The latest version of the ADVENT code.

Installation

  1. Install OpenCV if you don't already have it:
$ conda install -c menpo opencv
  2. Install ADVENT, the latest version:
$ git clone https://github.com/valeoai/ADVENT.git
$ pip install -e ./ADVENT
  3. Clone and install the repo:
$ git clone https://github.com/valeoai/DADA
$ pip install -e ./DADA

With this (the -e option of pip), you can edit the DADA code on the fly and import functions and classes of DADA in other projects as well.

  4. Optional. To uninstall this package, run:
$ pip uninstall DADA

You can take a look at the Dockerfile if you are uncertain about the steps to install this project.
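
As a quick check that the editable install is picked up (a minimal sketch, assuming the package is importable as dada, as the DADA/dada layout used by the scripts below suggests):

$ python -c "import dada; print(dada.__file__)"

The printed path should point inside your DADA checkout rather than into site-packages.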

Datasets

By default, the datasets are put in DADA/data. We use symlinks to hook the DADA codebase to the datasets, as in the example below. An alternative option is to explicitly specify the parameters DATA_DIRECTORY_SOURCE and DATA_DIRECTORY_TARGET in the YML configuration files.
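
For instance, assuming the datasets already live elsewhere on disk (the source paths below are placeholders to replace with your own), the symlinks can be created as follows:

$ mkdir -p DADA/data
$ ln -s /path/to/SYNTHIA_RAND_CITYSCAPES DADA/data/SYNTHIA
$ ln -s /path/to/cityscapes DADA/data/Cityscapes
$ ln -s /path/to/mapillary DADA/data/mapillary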

  • SYNTHIA: Please first follow the instructions here to download the images. In this work, we used the SYNTHIA-RAND-CITYSCAPES (CVPR16) split. The segmentation labels can be found here. The dataset directory should have this basic structure:

    DADA/data/SYNTHIA                           % SYNTHIA dataset root
    ├── RGB
    ├── parsed_LABELS
    └── Depth
  • Cityscapes: Please follow the instructions in Cityscapes to download the images and validation ground-truths. The Cityscapes dataset directory should have this basic structure:

    DADA/data/Cityscapes                       % Cityscapes dataset root
    ├── leftImg8bit
    │   ├── train
    │   └── val
    └── gtFine
        └── val
  • Mapillary: Please follow the instructions in Mapillary to download the images and validation ground-truths. The Mapillary dataset directory should have this basic structure:

    DADA/data/mapillary                        % Mapillary dataset root
    ├── train
    │   └── images
    └── validation
        ├── images
        └── labels

Pre-trained models

Pre-trained models can be downloaded here and put in DADA/pretrained_models.

Running the code

For evaluating pretrained networks, execute:

$ cd DADA/dada/scripts
$ python test.py --cfg ./<configs_dir>/dada_pretrained.yml
$ python test.py --cfg ./<configs_dir>/dada_cyclegan_pretrained.yml

<configs_dir> could be set as configs_s2c (SYNTHIA2Cityscapes) or configs_s2m (SYNTHIA2Mapillary), as in the example below.
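
For example, to evaluate the pretrained SYNTHIA2Cityscapes models:

$ python test.py --cfg ./configs_s2c/dada_pretrained.yml
$ python test.py --cfg ./configs_s2c/dada_cyclegan_pretrained.yml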

Training

For the experiments reported in the paper, we used PyTorch 1.2.0 and CUDA 10.0. To aid reproduction, the random seed has been fixed in the code. Still, you may need to train a few times, or to train longer (by changing MAX_ITERS and EARLY_STOP), to reach comparable performance.

By default, logs and snapshots are stored inDADA/experiments with this structure:

DADA/experiments
├── logs
└── snapshots

To train DADA:

$ cd DADA/dada/scripts
$ python train.py --cfg ./<configs_dir>/dada.yml
$ python train.py --cfg ./<configs_dir>/dada.yml --tensorboard         % using tensorboard

To train AdvEnt baseline:

$ cd DADA/dada/scripts
$ python train.py --cfg ./<configs_dir>/advent.yml
$ python train.py --cfg ./<configs_dir>/advent.yml --tensorboard         % using tensorboard
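
When the --tensorboard flag is used, training curves can be monitored with the TensorBoard CLI; a sketch, assuming the event files end up under the default DADA/experiments/logs directory shown above:

$ tensorboard --logdir DADA/experiments/logs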

Testing

To test DADA:

$ cd DADA/dada/scripts
$ python test.py --cfg ./<configs_dir>/dada.yml

Acknowledgements

This codebase heavily depends on AdvEnt.

License

DADA is released under the Apache 2.0 license.

