
YAIB logo

🧪 Yet Another ICU Benchmark


Yet Another ICU Benchmark (YAIB) provides a framework for running clinical machine learning experiments on Intensive Care Unit (ICU) EHR data.

We support the following datasets out of the box:

| Dataset | MIMIC-III / IV | eICU-CRD | HiRID | AUMCdb |
| --- | --- | --- | --- | --- |
| Admissions | 40k / 73k | 200k | 33k | 23k |
| Version | v1.4 / v2.2 | v2.0 | v1.1.1 | v1.0.2 |
| Frequency (time series) | 1 hour | 5 minutes | 2 / 5 minutes | up to 1 minute |
| Originally published | 2015 / 2020 | 2017 | 2020 | 2019 |
| Origin | USA | USA | Switzerland | Netherlands |

New datasets can also be added. We are currently working on a package to make this process as smooth as possible. The benchmark is designed to operate on preprocessed parquet files.

We provide five common tasks for clinical prediction by default:

| No | Task | Frequency | Type |
| --- | --- | --- | --- |
| 1 | ICU Mortality | Once per stay (after 24H) | Binary classification |
| 2 | Acute Kidney Injury (AKI) | Hourly (within 6H) | Binary classification |
| 3 | Sepsis | Hourly (within 6H) | Binary classification |
| 4 | Kidney Function (KF) | Once per stay | Regression |
| 5 | Length of Stay (LoS) | Hourly (within 7D) | Regression |

New tasks can be easily added. To get started right away, we include the eICU and MIMIC-III demo datasets in our repository.

The following repositories may be relevant as well:

For all YAIB-related repositories, please see: https://github.com/stars/rvandewater/lists/yaib.

📄 Paper

To reproduce the benchmarks in our paper, we refer to the ML reproducibility document. If you use this code in your research, please cite the following publication:

@inproceedings{vandewaterYetAnotherICUBenchmark2024,
  title = {Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML},
  shorttitle = {Yet Another ICU Benchmark},
  booktitle = {The Twelfth International Conference on Learning Representations},
  author = {van de Water, Robin and Schmidt, Hendrik Nils Aurel and Elbers, Paul and Thoral, Patrick and Arnrich, Bert and Rockenschaub, Patrick},
  year = {2024},
  month = oct,
  urldate = {2024-02-19},
  langid = {english},
}

This paper can also be found on arXiv: https://arxiv.org/abs/2306.05109.

💿 Installation

YAIB is currently best installed from source; however, we also offer an early PyPI release.

Installation from source

First, we clone this repository using git:

git clone https://github.com/rvandewater/YAIB.git

Please note the branch: the newest features and fixes are available on the development branch:

git checkout development

YAIB can be installed using a conda environment (preferred) or pip. Below are the three CLI commands to install YAIB using conda.

The first command will install an environment based on Python 3.10.

conda env update -f environment.yml

Use environment.yml on x86 hardware. Please note that this installs PyTorch as well.

For MPS, one needs to comment out pytorch-cuda; see the PyTorch install guide.

We then activate the environment and install a package called icu-benchmarks, after which YAIB should be operational.

conda activate yaib
pip install -e .

After installation, please check whether your PyTorch version works with CUDA (if available) to ensure the best performance. YAIB automatically lists available processors at initialization in its log files.
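A minimal sketch of such a check, using only the public PyTorch API (the MPS query requires PyTorch ≥ 1.12; the printout format is our own, not YAIB's log format):

```python
import torch

# Report which accelerators PyTorch can see; YAIB logs similar
# information at initialization.
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA device:     {torch.cuda.get_device_name(0)}")
# On Macs, Metal Performance Shaders (MPS) may be used instead of CUDA.
print(f"MPS available:   {torch.backends.mps.is_available()}")
```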

👩‍💻 Usage

Please refer to our wiki for detailed information on how to use YAIB.

Quickstart 🚀 (demo data)

The authors of MIMIC-III and eICU have made small demo datasets available to demonstrate their use. They can be found on PhysioNet: the MIMIC-III Clinical Database Demo and the eICU Collaborative Research Database Demo. These datasets are published under the Open Data Commons Open Database License v1.0 and can be used without a credentialing procedure. We have created demo cohorts, processed solely from these datasets, for each of our currently supported task endpoints. To the best of our knowledge, this complies with the license and the respective dataset authors' instructions. Usage of the task cohorts and the datasets is only permitted under the above license. We strongly recommend completing a human subject research training to ensure you properly handle human subject research data.

In the folder demo_data we provide processed, publicly available demo datasets from eICU and MIMIC with the necessary labels for Mortality at 24h, Sepsis, Acute Kidney Injury, Kidney Function, and Length of Stay.
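Since the benchmark operates on preprocessed parquet files, the demo cohorts can be inspected with standard tools. A minimal sketch using pandas (pyarrow is assumed to be installed; the exact file names inside demo_data are an assumption, so adjust the glob to what you find in the repository):

```python
from pathlib import Path

import pandas as pd

# The mortality24/mimic_demo folder is referenced in the training example
# below; the parquet file names inside it may differ.
cohort_dir = Path("demo_data/mortality24/mimic_demo")

for parquet_file in sorted(cohort_dir.glob("*.parquet")):
    df = pd.read_parquet(parquet_file)
    print(parquet_file.name, df.shape)
    print(df.head())
```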

If you do not yet have access to the ICU datasets, you can run the following commands to train models for the included demo cohorts:

wandb sweep --verbose experiments/demo_benchmark_classification.yml
wandb sweep --verbose experiments/demo_benchmark_regression.yml
wandb agent <sweep_id>

Tip: You can choose to run each of the configurations on a SLURM cluster instance by running wandb agent --count 1 <sweep_id>

Note: You will need to have a wandb account and be logged in to run the above commands.

Getting the datasets

HiRID, eICU, and MIMIC-IV can be accessed through PhysioNet. A guide to this process can be found here. AUMCdb can be accessed through a separate access procedure. We are not involved in the access procedures and cannot answer requests for data access.

Cohort creation

Since the datasets were created independently of each other, they do not share the same data structure or data identifiers. In order to make them interoperable, use the preprocessing utilities provided by the ricu package. Ricu pre-defines a large number of clinical concepts and how to load them from a given dataset, providing a common interface to the data that is used in this benchmark (a conceptual sketch follows below). Please refer to our cohort definition code for generating the cohorts using our Python interface for ricu. After this, you can run the benchmark once you have gained access to the datasets.
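To illustrate the kind of harmonization involved, here is a purely conceptual pandas sketch (this is not ricu's actual API, and all column names are hypothetical) of turning dataset-specific event timestamps into the hourly, within-6H labels used by tasks such as Sepsis and AKI:

```python
import pandas as pd

# Hypothetical harmonized events: one row per stay with an onset time,
# regardless of which source dataset the stay came from.
onsets = pd.DataFrame(
    {"stay_id": [1], "onset_hour": [10]}  # e.g. sepsis onset 10h into the stay
)

# Hourly prediction grid for a 24-hour stay (hypothetical length).
grid = pd.DataFrame({"stay_id": 1, "hour": range(24)})

# Label each hour positive if onset occurs within the next 6 hours,
# mirroring the "Hourly (within 6H)" task definition above.
labeled = grid.merge(onsets, on="stay_id", how="left")
labeled["label"] = (
    (labeled["onset_hour"] - labeled["hour"]).between(0, 6).astype(int)
)
print(labeled[["stay_id", "hour", "label"]].head(12))
```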

👟 Running YAIB

Preprocessing and Training

The following command will run training and evaluation on the MIMIC demo dataset for (binary) mortality prediction at 24h with the LGBMClassifier. Child samples are reduced due to the small amount of training data. We generate a cache and, if available, load existing cache files.

icu-benchmarks \
    -d demo_data/mortality24/mimic_demo \
    -n mimic_demo \
    -t BinaryClassification \
    -tn Mortality24 \
    -m LGBMClassifier \
    -hp LGBMClassifier.min_child_samples=10 \
    --generate_cache \
    --load_cache \
    --seed 2222 \
    -l ../yaib_logs/ \
    --tune

For a list of available flags, run icu-benchmarks train -h.

Run with PYTORCH_ENABLE_MPS_FALLBACK=1 on Macs with Metal Performance Shaders.

For Windows-based systems, the line continuation character (\) needs to be replaced by (^) (Command Prompt) or (`) (PowerShell), respectively.

Alternatively, the easiest method to train all the models in the paper is to run these commands from the directory root:

wandb sweep --verbose experiments/benchmark_classification.yml
wandb sweep --verbose experiments/benchmark_regression.yml

This will create two hyperparameter sweeps for WandB, one for the classification tasks and one for the regression tasks; together they will train all the models in the paper. You can then run the following command to train the models:

wandb agent <sweep_id>

Tip: You can choose to run each of the configurations on a SLURM cluster instance by running wandb agent --count 1 <sweep_id>

Note: You will need to have a wandb account and be logged in to run the above commands.

Evaluate or Finetune

It is possible to evaluate a model trained on another dataset; in this case, no additional training is done. Here, the source dataset is the demo data from MIMIC and the target is the eICU demo:

icu-benchmarks \
    --eval \
    -d demo_data/mortality24/eicu_demo \
    -n eicu_demo \
    -t BinaryClassification \
    -tn Mortality24 \
    -m LGBMClassifier \
    --generate_cache \
    --load_cache \
    -s 2222 \
    -l ../yaib_logs \
    -sn mimic \
    --source-dir ../yaib_logs/mimic_demo/Mortality24/LGBMClassifier/2022-12-12T15-24-46/repetition_0/fold_0

A similar syntax is used for finetuning, where a model is loaded and then retrained. To run finetuning, replace --eval with -ft.

Models

We provide several existing machine learning models that are commonly used for multivariate time-series data. pytorch is used for the deep learning models, lightgbm for the boosted tree approaches, and sklearn for the other classical machine learning models. The benchmark ships with a number of built-in models from these libraries; see our wiki for the full list.
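As a flavor of the model families involved, here is a minimal, self-contained sketch of fitting one boosted-tree and one classical baseline on a binary task; the synthetic data and training loop are our own illustration, not YAIB's internal pipeline (min_child_samples echoes the hyperparameter used in the demo command above):

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted ICU features (1000 stays, 20 features).
rng = np.random.default_rng(2222)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LGBMClassifier(min_child_samples=10), LogisticRegression(max_iter=1000)):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(type(model).__name__, f"AUROC: {auc:.3f}")
```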

🛠️ Development

To adapt YAIB to your own use case, you can use the development information page as a reference. We appreciate contributions to the project; please read the contribution guidelines before submitting a pull request.

Acknowledgements

This project has been developed partially under the funding of the “Gemeinsamer Bundesausschuss (G-BA) Innovationsausschuss” in the framework of “CASSANDRA - Clinical ASSist AND aleRt Algorithms” (project number 01VSF20015). We would like to acknowledge the work of Alisher Turubayev, Anna Shopova, Fabian Lange, Mahmut Kamalak, Paul Mattes, and Victoria Ayvasky for adding PyTorch Lightning and Weights and Biases compatibility, as well as several optional imputation methods, to a later version of the benchmark repository.

We do not own any of the datasets used in this benchmark. This project uses heavily adapted components of the HiRID benchmark. We thank the authors for providing this codebase and encourage further development to benefit the scientific community. The demo datasets have been released under an Open Data Commons Open Database License (ODbL).

License

This source code is released under the MIT license, included here. We do not own any of the datasets used or included in this repository.
