Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation


The source code for our paper "Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation"

Our paper can be found at this link.

Overview

Dataset

The BraTS dataset is used for training and evaluating the model. It contains four modalities for each brain, namely T1, T1c (post-contrast T1), T2, and FLAIR, all of which were skull-stripped, resampled, and co-registered. For more information, please refer to the main site.

Pre-processing

For pre-processing, the N4ITK algorithm is first applied to each MRI modality to correct the intensity inhomogeneity of the images. The top and bottom 1% of intensities are then removed, and each modality is normalized to zero mean and unit variance.
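
For illustration, the intensity steps (1% clipping and z-score normalization) can be sketched in a few lines of NumPy; the helper name below is ours, not part of this repository:

import numpy as np

def normalize_modality(volume):
    """Clip the top and bottom 1% of intensities, then z-score normalize.

    `volume` is a NumPy array holding one MRI modality (illustrative
    helper, not this repository's API).
    """
    low, high = np.percentile(volume, (1, 99))
    clipped = np.clip(volume, low, high)
    # Normalize to zero mean and unit variance.
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)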

Architecture


[Figure: the attention-guided 2D U-Net architecture]


The network is based on the U-Net architecture, with the following modifications:

  • Minor modifications: residual units, strided convolutions, PReLU activations, and batch normalization layers are added to the original U-Net.
  • Attention mechanism: a Squeeze-and-Excitation (SE) block is employed on the concatenated multi-level features. This technique reduces confusion for the model by adaptively weighting each channel (please refer to our paper for more information); a minimal sketch follows this list.

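As a rough illustration of the attention mechanism, here is a minimal Squeeze-and-Excitation block in Keras; the layer choices and reduction ratio are assumptions, not the repository's exact implementation:

from tensorflow.keras import layers

def se_block(x, reduction=16):
    """Squeeze-and-Excitation on a feature map of shape (B, H, W, C).

    Illustrative sketch of channel-wise reweighting, not the exact
    layers used in this repository.
    """
    channels = x.shape[-1]
    # Squeeze: global spatial average per channel.
    s = layers.GlobalAveragePooling2D()(x)
    # Excitation: bottleneck MLP producing per-channel weights in (0, 1).
    s = layers.Dense(channels // reduction, activation='relu')(s)
    s = layers.Dense(channels, activation='sigmoid')(s)
    # Scale: reweight each channel of the input feature map.
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])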

Training Process

Since our proposed network is a 2D architecture, we extract 2D slices from the 3D MRI volumes. To benefit from the 3D contextual information of the input images, we extract 2D slices from both the axial and coronal views and train a separate network for each view. At test time, we build a 3D output volume for each model by stacking its 2D predicted maps, and finally fuse the two views by pixel-wise averaging.
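
The fusion step amounts to averaging two probability volumes; a minimal sketch, assuming both views have already been stacked back into the same 3D orientation (the helper below is ours, not the repository's code):

import numpy as np

def fuse_views(axial_probs, coronal_probs):
    """Fuse per-view 3D probability volumes by pixel-wise averaging."""
    fused = (axial_probs + coronal_probs) / 2.0
    # Final label map: take the most probable class per voxel.
    return np.argmax(fused, axis=-1)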



Results

The results are obtained from the BraTS online evaluation platform using the BraTS 2018 validation set.



[Figure: results on the BraTS 2018 validation set]


Dependencies

Usage

1- Download the BraTS 2019, 2018 or 2017 data by following the steps described in BraTS

2- Perform N4ITK bias correction using ANTs, following the steps in this repo (this step is optional)
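
If ANTs is not set up, SimpleITK ships its own N4 implementation; a minimal alternative sketch (file names are placeholders, and this substitutes SimpleITK for the ANTs-based steps above):

import SimpleITK as sitk

# Read one modality and build a rough foreground mask.
image = sitk.ReadImage('t1.nii.gz', sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)

# Run N4 bias field correction and save the result.
corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(image, mask)
sitk.WriteImage(corrected, 't1_n4.nii.gz')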

3- Set the path to all brain volumes in config.py (ex: cfg['data_dir'] = './BRATS19/MICCAI_BraTS_2019_Data_Training/*/*/')

4- To read, preprocess and save all brain volumes into a single table file:

python prepare_data.py

5- To run the training:

python train.py

The model can be trained from the axial, sagittal or coronal view (set cfg['view'] in config.py). Moreover, K-fold cross-validation can be used (set cfg['k_fold'] in config.py); an illustrative configuration is sketched below.
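
Only cfg['data_dir'], cfg['view'] and cfg['k_fold'] are named in this README, so treat the exact values below as assumptions:

# Illustrative config.py settings based on the options described above.
cfg = {}
cfg['data_dir'] = './BRATS19/MICCAI_BraTS_2019_Data_Training/*/*/'
cfg['view'] = 'axial'   # one of: 'axial', 'sagittal', 'coronal'
cfg['k_fold'] = 5       # number of folds for cross-validation (assumed value)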

6- To predict and save label maps:

python predict.py

The predictions will be written in .nii.gz format and can be uploaded to the BraTS online evaluation platform.
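
As a quick sanity check before uploading, a saved prediction can be inspected with nibabel (the file name is a placeholder):

import nibabel as nib

pred = nib.load('prediction.nii.gz')
labels = pred.get_fdata()
print(labels.shape, labels.dtype)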

Citation

@inproceedings{noori2019attention,
  title={Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation},
  author={Noori, Mehrdad and Bahri, Ali and Mohammadi, Karim},
  booktitle={2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)},
  pages={269--275},
  year={2019},
  organization={IEEE}
}
