
andreped/livermask


---
title: "livermask: Automatic Liver Parenchyma and vessel segmentation in CT"
colorFrom: indigo
colorTo: indigo
sdk: docker
app_port: 7860
emoji: 🔎
pinned: false
license: mit
app_file: demo/app.py
---

Automatic liver parenchyma and vessel segmentation in CT using deep learning


livermask was developed by SINTEF Medical Technology to provide an open tool to accelerate research.

Demo

An online version of the tool is openly available at Hugging Face Spaces, enabling researchers to easily test the software on their own data without downloading anything.

Install

A stable release is available on PyPI:

pip install livermask

Alternatively, to install from source do:

pip install git+https://github.com/andreped/livermask.git

As TensorFlow 2.4 only supports Python 3.6-3.8, so does livermask. The software is also compatible with Anaconda; however, the best way to install livermask is with pip, which also works inside conda environments.

(Optional) To add GPU inference support for liver vessel segmentation (which uses Chainer and CuPy), you need to install CuPy. This can easily be done by installing cupy-cudaX, where X is the CUDA version you have installed, for instance cupy-cuda110 for CUDA 11.0:

pip install cupy-cuda110

The program has been tested with Python 3.7 on Windows, macOS, and Ubuntu Linux 20.04.

Usage

livermask --input path-to-input --output path-to-output
| command | description |
| --- | --- |
| `--input` | the full path to the input data; either a NIfTI file or a directory |
| `--output` | the full path to the output data; either an output name or a directory (if a directory is provided as input) |
| `--cpu` | disable the GPU (force computations on CPU only) |
| `--verbose` | enable verbose output |
| `--vessels` | also segment the liver vessels |
| `--extension` | which extension to save the output in (default: `.nii`) |
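The flags above can be combined in a single invocation. A minimal sketch, assuming a hypothetical directory layout (the paths below are placeholders, not part of the tool):

```shell
# Hypothetical paths -- adjust to your own data layout.
INPUT_DIR="/data/ct_scans"         # directory of .nii CT volumes
OUTPUT_DIR="/data/segmentations"   # one mask is written per input volume

# Segment liver parenchyma and vessels, saving compressed NIfTI output.
# Guarded so the sketch is a no-op when livermask is not on PATH.
if command -v livermask >/dev/null 2>&1; then
    livermask --input "$INPUT_DIR" --output "$OUTPUT_DIR" \
              --vessels --verbose --extension .nii.gz
fi
```

Because a directory was given as input, the output argument is also interpreted as a directory.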

Using code directly

If you wish to use the code directly (not as a CLI and without installing), you can run this command:

python -m livermask.livermask --input path-to-input --output path-to-output

DICOM/NIfTI format

The pipeline assumes the input is in the NIfTI format, and outputs a binary volume in the same format (.nii or .nii.gz). DICOM can be converted to NIfTI using the CLI dcm2niix, as such:

dcm2niix -s y -m y -d 1 "path_to_CT_folder" "output_name"

Note that "-d 1" assumes that "path_to_CT_folder" is the folder directly above the set of DICOM scans you want to import and convert. It can be removed if you want to convert multiple series at the same time. It is possible to set "." as "output_name", which in theory should produce a file with the same name as the DICOM folder, but that does not appear to work reliably.
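Converting multiple series could be scripted as below; this is a sketch under the assumption of one sub-folder of DICOM slices per patient (the layout and paths are hypothetical, and dcm2niix must be installed):

```shell
# Hypothetical layout: one sub-folder of DICOM slices per patient.
DICOM_ROOT="/data/dicom"
NIFTI_OUT="/tmp/nifti_out"   # placeholder output directory
mkdir -p "$NIFTI_OUT"

for patient in "$DICOM_ROOT"/*/; do
    [ -d "$patient" ] || continue            # skip if the glob matched nothing
    name=$(basename "$patient")
    # Guarded so the sketch is a no-op when dcm2niix is not installed.
    if command -v dcm2niix >/dev/null 2>&1; then
        dcm2niix -s y -m y -o "$NIFTI_OUT" -f "$name" "$patient"
    fi
done
```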

Troubleshooting

You might have issues downloading the model when using a VPN. If so, disable the VPN and try again.

If the program struggles to install, attempt to install using:

pip install --force-reinstall --no-deps git+https://github.com/andreped/livermask.git

If you experience issues with numpy after installing CuPy, try reinstalling CuPy with an explicit version range:

pip install 'cupy-cuda110>=7.7.0,<8.0.0'

Applications of livermask

Segmentation performance metrics

The segmentation models were evaluated on an internal dataset against manual annotations. See Table E in S4 Appendix of the Supporting Information of this paper for more information. The table presented there is reproduced below:

| Class | DSC | HD95 |
| --- | --- | --- |
| Parenchyma | 0.946 ± 0.046 | 10.122 ± 11.032 |
| Vessels | 0.355 ± 0.090 | 24.872 ± 5.161 |
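For reference, the Dice similarity coefficient (DSC) measures voxel-wise overlap between the predicted mask and the manual annotation, DSC = 2|A∩B| / (|A| + |B|), while HD95 is the 95th percentile of the symmetric surface distances. A minimal DSC sketch with NumPy (the arrays are toy data, not from the study):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 2D example: 4-voxel prediction vs. 6-voxel ground truth, overlap of 4.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # 4 voxels
gt   = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True     # 6 voxels
print(dice(pred, gt))  # 2*4 / (4+6) = 0.8
```

The same function applies unchanged to 3D volumes loaded from the tool's NIfTI output.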

The parenchyma segmentation model was trained on the LITS dataset, whereas the vessel model was trained on a local dataset (Oslo-CoMet). The LITS dataset is openly accessible and can be downloaded from here.

The Oslo-CoMet dataset included 60 patients, of which 11 representative patients were used as a hold-out sample for the performance assessment.

Acknowledgements

If you found this tool helpful in your research, please consider citing it (see here for more information on how to cite):

@software{andre_pedersen_2023_7574587,
  author       = {André Pedersen and Javier Pérez de Frutos},
  title        = {andreped/livermask: v1.4.1},
  month        = jan,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v1.4.1},
  doi          = {10.5281/zenodo.7574587},
  url          = {https://doi.org/10.5281/zenodo.7574587}
}

In addition, the segmentation performance of the tool was presented in this paper; cite it as well if that is of relevance to your study:

@article{perezdefrutos2022ddmr,
  title     = {Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation},
  author    = {Pérez de Frutos, Javier AND Pedersen, André AND Pelanis, Egidijus AND Bouget, David AND Survarachakan, Shanmugapriya AND Langø, Thomas AND Elle, Ole-Jakob AND Lindseth, Frank},
  journal   = {PLOS ONE},
  publisher = {Public Library of Science},
  year      = {2023},
  month     = {02},
  volume    = {18},
  number    = {2},
  pages     = {1-14},
  doi       = {10.1371/journal.pone.0282110},
  url       = {https://doi.org/10.1371/journal.pone.0282110}
}
