# CompSegNet: An enhanced U-shaped architecture for nuclei segmentation in H&E histopathology images
## Highlights

- Zoom-Filter-Rescale (ZFR) strategy
  - Enhances MBConv efficiency by combining:
    - a large stride at the entry expansion layer (Zooming);
    - a large-kernel depthwise convolution (Zooming and Filtering);
    - bilinear upsampling that rescales the spatial size to compensate for the downsampling (Rescaling).
  - Increases the receptive field within MBConv by 8×.
  - Decreases inference latency by approximately 30%.
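For illustration, a minimal PyTorch sketch of a ZFR-style MBConv block is shown below. This is not the repository's implementation; the framework choice and the specific stride (2) and depthwise kernel size (7×7) are assumptions for the example.

```python
import torch.nn as nn
import torch.nn.functional as F

class ZFRMBConv(nn.Module):
    """Illustrative MBConv block with Zoom-Filter-Rescale (ZFR)."""

    def __init__(self, in_ch, out_ch, expansion=4, stride=2, kernel_size=7):
        super().__init__()
        hidden = in_ch * expansion
        # Zoom: a strided 1x1 expansion shrinks the spatial grid early,
        # so the expensive depthwise stage runs on fewer positions.
        self.expand = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, stride=stride, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
        )
        # Filter: a large-kernel depthwise convolution on the reduced grid
        # enlarges the effective receptive field.
        self.depthwise = nn.Sequential(
            nn.Conv2d(hidden, hidden, kernel_size, padding=kernel_size // 2,
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
        )
        self.project = nn.Sequential(
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        size = x.shape[-2:]
        y = self.depthwise(self.expand(x))
        # Rescale: bilinear upsampling restores the input resolution,
        # compensating for the downsampling introduced by the Zoom step.
        y = F.interpolate(y, size=size, mode="bilinear", align_corners=False)
        return self.project(y)
```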
- Extended CSeg block
  - Extends the CSeg block with an improved global context (IGC) block for robust global context modeling.
  - Improves the AJI score for nuclei segmentation on the MoNuSeg 2018 dataset by approximately 1.9% in a lightweight network.
  - Delivers this gain at only a minimal computational cost.
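The IGC block's exact design is given in the paper; for orientation, below is a minimal sketch of the GCNet-style global context block that such designs build on. The channel reduction ratio is an assumed example value.

```python
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    """GCNet-style global context block; the paper's IGC block
    refines this basic design."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, 1)  # per-position attention logits
        hidden = max(channels // reduction, 1)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, hidden, 1),
            nn.LayerNorm([hidden, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Context modelling: softmax-weighted global pooling over all positions.
        weights = self.attn(x).view(b, 1, h * w).softmax(dim=-1)           # (B, 1, HW)
        context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))  # (B, C, 1)
        context = context.view(b, c, 1, 1)
        # Fusion: broadcast-add the transformed context to every position.
        return x + self.transform(context)
```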
- Transformer-enhanced Sandglass block
  - Incorporates Transformer encoder blocks, in a tailored manner, into a variant of the Sandglass block.
  - Leverages the strengths of both convolutional and attention-based mechanisms.
  - Effectively captures fine-grained spatial information alongside comprehensive long-range dependencies.
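One plausible way to pair a Transformer encoder with a Sandglass-style block is sketched below. The wiring (attention applied at the reduced-channel bottleneck) is a hypothetical reading of the design for illustration, not the paper's exact block.

```python
import torch.nn as nn

class SandglassTransformer(nn.Module):
    """Hypothetical Sandglass-style block with a Transformer encoder
    applied at the reduced-channel bottleneck."""

    def __init__(self, channels, reduction=4, heads=4):
        super().__init__()
        mid = channels // reduction  # must be divisible by `heads`
        # Depthwise convolutions at full width capture fine-grained spatial
        # detail (the Sandglass design keeps the residual at high dimension).
        self.dw_in = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        self.reduce = nn.Conv2d(channels, mid, 1, bias=False)
        # Self-attention over the flattened grid models long-range dependencies.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=mid, nhead=heads, dim_feedforward=2 * mid, batch_first=True)
        self.expand = nn.Conv2d(mid, channels, 1, bias=False)
        self.dw_out = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        y = self.reduce(self.dw_in(x))
        b, c, h, w = y.shape
        t = self.encoder(y.flatten(2).transpose(1, 2))  # (B, HW, C)
        y = t.transpose(1, 2).view(b, c, h, w)
        return x + self.dw_out(self.expand(y))
```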
- Stem block
- Enhances low-level feature extraction while mitigating the impact of noisy features in the model's initial skip connections.
## Installation

Clone the project:

```
$ git clone https://github.com/mltraore/compsegnet.git
$ cd compsegnet
```

Install the requirements:

```
$ chmod +x install_dependencies.sh
$ pip install -r requirements.txt
$ sudo ./install_dependencies.sh
```
## Datasets

The experiments use several public nuclei segmentation datasets, including MoNuSeg 2018.

Processed versions of these datasets, along with the original versions, predicted masks, and a checkpoint sample for the MoNuSeg 2018 dataset, can be downloaded here. For automatic download, run:

```
$ chmod +x download_resources.sh
$ ./download_resources.sh
```

in the project directory.

- Note: The included predicted masks are actual outputs of the CompSegNet architecture, provided by the authors for qualitative comparison and further evaluation.
To create the datasets yourself, use the prepare_dataset.py script:

```
# Get script usage info
$ python3 prepare_dataset.py --help

# Example usage
$ python3 prepare_dataset.py --dataset-path datasets/monuseg_2018 \
                             --image-size 1000 \
                             --validation-size 0.10 \
                             --reference-image datasets/monuseg_2018/train/images/1.tif \
                             --prepared-data-path datasets/prepared_datasets/monuseg_2018
```
## Training

Use the train.py script in the project directory to train the model:

```
# Get script usage info
$ python3 train.py --help

# Example usage
$ python3 train.py --train-folder datasets/prepared_datasets/monuseg_2018/train \
                   --validation-folder datasets/prepared_datasets/monuseg_2018/validation \
                   --checkpoints-folder checkpoints/ckpts
```
## Testing

Use the test.py script in the project directory to test the model:

```
# Get script usage info
$ python3 test.py --help

# Example usage
$ python3 test.py --model-weights-save checkpoints/ckpts \
                  --test-set datasets/prepared_datasets/monuseg_2018/train
```

This provides Dice Similarity Coefficient (DSC), Aggregated Jaccard Index (AJI), and Hausdorff Distance (HD) scores.
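For reference, the DSC measures overlap between the predicted and ground-truth masks. A minimal NumPy version (illustrative only, not the repository's implementation) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```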
## Results

*CompSegNet on the MoNuSeg 2018 dataset*
## Citation

```
@article{traore2024compsegnet,
  title     = {CompSegNet: An enhanced U-shaped architecture for nuclei segmentation in H\&E histopathology images},
  author    = {Traoré, Mohamed and Hancer, Emrah and Samet, Refik and Yildirim, Zeynep and Nemati, Nooshin},
  journal   = {Biomedical Signal Processing and Control},
  volume    = {97},
  pages     = {106699},
  year      = {2024},
  publisher = {Elsevier}
}
```
## Acknowledgements

This work is supported by TÜBİTAK (Scientific and Technological Research Council of Türkiye) under project number 121E379.