Implementation of Visual Feature Attribution using Wasserstein GANs (VAGANs, https://arxiv.org/abs/1711.08998) in PyTorch
This code aims to reproduce the results obtained in the paper "Visual Feature Attribution using Wasserstein GANs" (official repo, TensorFlow code).

This repository contains the code to reproduce the results of the paper cited above, in which the authors present a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGANs). The code works for both synthetic (2D) and real 3D neuroimaging data; a brief description of the two datasets follows below.
Here is an example of what the generator/mapper network should produce: ctrl-click on the image below to open the gifv in a new tab (one frame every 50 iterations; left: input, right: anomaly map for synthetic data at iteration 50 * (its + 1)).
"Data: In order to quantitatively evaluate the performance of the examined visual attribution methods, we generated a synthetic dataset of 10000 112x112 images with two classes, which model a healthy control group (label 0) and a patient group (label 1). The images were split evenly across the two categories. We closely followed the synthetic data generation process described in [31] [SubCMap: Subject and Condition Specific Effect Maps], where disease effects were studied in smaller cohorts of registered images. The control group (label 0) contained images with random iid Gaussian noise convolved with a Gaussian blurring filter. Examples are shown in Fig. 3. The patient images (label 1) also contained the noise, but additionally exhibited one of two disease effects which was generated from a ground-truth effect map: a square in the centre and a square in the lower right (subtype A), or a square in the centre and a square in the upper left (subtype B). Importantly, both disease subtypes shared the same label. The location of the off-centre squares was randomly offset in each direction by a maximum of 5 pixels. This moving effect was added to make the problem harder, but had no notable effect on the outcome."
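The generation process quoted above can be sketched as follows. This is a hypothetical illustration, not the repository's actual data loader: the square size, blur sigma, and effect intensity are assumptions chosen for readability.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_sample(label, rng, size=112, square=10, sigma=2.0, max_offset=5):
    """Generate one synthetic image as described in the paper.

    label 0: blurred iid Gaussian noise (healthy control).
    label 1: same noise plus a ground-truth effect map, i.e. a central
    square and one randomly offset corner square (subtype A or B).
    Square size, sigma, and the +1.0 effect intensity are assumptions.
    """
    img = gaussian_filter(rng.standard_normal((size, size)), sigma=sigma)
    if label == 1:
        c = size // 2 - square // 2
        img[c:c + square, c:c + square] += 1.0  # central square (both subtypes)
        # Off-centre square, jittered by up to `max_offset` pixels per axis.
        dx, dy = rng.integers(-max_offset, max_offset + 1, size=2)
        if rng.random() < 0.5:  # subtype A: lower-right square
            x, y = 3 * size // 4 + dx, 3 * size // 4 + dy
        else:                   # subtype B: upper-left square
            x, y = size // 4 + dx, size // 4 + dy
        img[x:x + square, y:y + square] += 1.0
    return img
```

Note that both subtypes return the same label, which is what makes the attribution task non-trivial: a classifier only needs the central square, while a faithful attribution method should recover the full effect map.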
Currently only training on the synthetic dataset is implemented; we will work on implementing training on the ADNI dataset as soon as possible (pull requests are welcome, as always). The ADNI dataset details are reported below for the sake of completeness.
"We selected 5778 3D T1-weighted MR images from 1288 subjects with either an MCI (label 0) or AD (label 1) diagnosis from the ADNI cohort. 2839 of the images were acquired using a 1.5T magnet, the remainder using a 3T magnet. The subjects are scanned at regular intervals as part of the ADNI study and a number of subjects converted from MCI to AD over the years. We did not use these correspondences for training, however, we took advantage of it for evaluation as will be described later. All images were processed using standard operations available in the FSL toolbox [52] [Advances in functional and structural MR image analysis and implementation as FSL] in order to reorient and rigidly register the images to MNI space, crop them and correct for field inhomogeneities. We then skull-stripped the images using the ROBEX algorithm [24] [Robust brain extraction across datasets and comparison with publicly available methods]. Lastly, we resampled all images to a resolution of 1.3 mm³ and normalised them to a range from -1 to 1. The final volumes had a size of 128x160x112 voxels."
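The last preprocessing step quoted above (normalising intensities to [-1, 1]) can be sketched in a few lines. This is a generic min-max rescaling, an assumption about how the normalisation is done, not code taken from the repository:

```python
import numpy as np

def normalise(volume):
    """Rescale a volume's intensities linearly to the range [-1, 1]."""
    vmin, vmax = volume.min(), volume.max()
    return 2.0 * (volume - vmin) / (vmax - vmin) - 1.0
```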
"Data used in preparation of this article were obtained fromthe Alzheimers disease Neuroimaging Initiative (ADNI) database(adni.loni.usc.edu).As such, the investigators within the ADNIcontributed to the design and implementation of ADNI and/or provided data butdid not participate in analysis or writing of thisreport. A complete listing of ADNI investigators can be found at:http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf"
To train the WGAN on this task, cd into this repo's src root folder and execute:
$ python train.py

This script takes the following command line options:

- dataset_root: the root directory where the dataset is stored, defaults to '../dataset'
- experiment: directory where samples and models will be saved, defaults to '../samples'
- batch_size: input batch size, defaults to 32
- image_size: the height/width of the input image to the network, defaults to 112
- channels_number: number of input image channels, defaults to 1
- num_filters_g: number of filters for the first layer of the generator, defaults to 16
- num_filters_d: number of filters for the first layer of the discriminator, defaults to 16
- nepochs: number of epochs to train for, defaults to 1000
- d_iters: number of discriminator iterations per generator iteration, defaults to 5
- learning_rate_g: learning rate for the generator, defaults to 1e-3
- learning_rate_d: learning rate for the discriminator, defaults to 1e-3
- beta1: beta1 for Adam, defaults to 0.0
- cuda: enables CUDA (store true)
- manual_seed: seed for manual initialization, defaults to 7
Running the command without arguments will train the models with the default hyperparameter values (producing the results shown above).
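The d_iters option above reflects the standard WGAN training schedule: several critic (discriminator) updates per generator update, with weight clipping on the critic. A minimal sketch of one such step is below; the toy linear networks, vector-shaped inputs, and the additive-map formulation (generator outputs a map added to the input, as in VA-GAN) are stand-ins for the repository's actual models, not its real training code.

```python
import torch
from torch import nn

def train_step(critic, mapper, critic_opt, mapper_opt, real,
               d_iters=5, clip=0.01):
    """One WGAN step: d_iters critic updates, then one mapper update."""
    for _ in range(d_iters):
        critic_opt.zero_grad()
        fake = real + mapper(real)  # mapper outputs an additive anomaly map
        # Critic loss: minimise E[D(fake)] - E[D(real)]
        # (i.e. maximise the Wasserstein estimate E[D(real)] - E[D(fake)]).
        loss_d = critic(fake.detach()).mean() - critic(real).mean()
        loss_d.backward()
        critic_opt.step()
        # Weight clipping enforces (a crude form of) the Lipschitz constraint.
        for p in critic.parameters():
            p.data.clamp_(-clip, clip)
    mapper_opt.zero_grad()
    fake = real + mapper(real)
    loss_g = -critic(fake).mean()  # mapper tries to raise the critic's score
    loss_g.backward()
    mapper_opt.step()
    return loss_d.item(), loss_g.item()
```

Weight clipping follows the original "Wasserstein GAN" recipe; the wgan-gp repository listed below replaces it with a gradient penalty.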
We ported all models found in the original repository to PyTorch; you can find all implemented models here: https://github.com/orobix/Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch/tree/master/src/models
- vagan-code: repository for the reference paper from its authors
- ganhacks: starter from "How to Train a GAN?" at NIPS 2016
- WassersteinGAN: code accompanying the paper "Wasserstein GAN"
- wgan-gp: PyTorch implementation of the paper "Improved Training of Wasserstein GANs"
- c3d-pytorch: model used as discriminator in the reference paper
- Pytorch-UNet: model used as generator in this repository
- dcgan: model used as discriminator in this repository
Cite the paper as follows (copy-pasted from arXiv for you):

    @article{DBLP:journals/corr/abs-1711-08998,
      author        = {Christian F. Baumgartner and Lisa M. Koch and Kerem Can Tezcan and Jia Xi Ang and Ender Konukoglu},
      title         = {Visual Feature Attribution using Wasserstein GANs},
      journal       = {CoRR},
      volume        = {abs/1711.08998},
      year          = {2017},
      url           = {http://arxiv.org/abs/1711.08998},
      archivePrefix = {arXiv},
      eprint        = {1711.08998},
      timestamp     = {Sun, 03 Dec 2017 12:38:15 +0100},
      biburl        = {http://dblp.org/rec/bib/journals/corr/abs-1711-08998},
      bibsource     = {dblp computer science bibliography, http://dblp.org}
    }

This project is licensed under the MIT License.
Copyright (c) 2018 Daniele E. Ciriello, Orobix Srl (www.orobix.com).