
FLIMngo

We present FLIMngo, a novel network for predicting fluorescence lifetimes from raw TCSPC-FLIM data. Our model is based on the YOLOv5 architecture, which has been adapted for pixel-wise regression tasks.


Deep learning for fluorescence lifetime predictions enables high-throughput in vivo imaging
Sofia Kapsiani, Nino F. Läubli, Edward N. Ward, Ana Fernandez-Villegas, Bismoy Mazumder, Clemens F. Kaminski, Gabriele S. Kaminski Schierle
Molecular Neuroscience Group and Laser Analytics Group (University of Cambridge)

[Manuscript (bioRxiv)] [Supplementary Information] [Citation]

Usage

```bash
git clone https://github.com/SofiaKapsiani/FLIMngo.git
cd FLIMngo

# Create and activate a Conda environment
conda create --name flimngo_env python=3.9 -y
conda activate flimngo_env

# Install dependencies
pip install -r requirements.txt
```

Predictions can be made using the pretrained model file, flimngo_pretrained_v13102024.pth.
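
A minimal sketch of what a prediction call might look like, assuming a PyTorch model class (hypothetically imported here as `FLIMngo`) and a preprocessed TCSPC stack of shape (256, 256, 256); see the notebooks in demo_notebooks for the actual workflow:

```python
# Illustrative only: the model class, import path and example file below are
# assumptions; refer to demo_notebooks for the repository's actual prediction code.
import numpy as np
import torch

from models import FLIMngo  # hypothetical import path

device = "cuda" if torch.cuda.is_available() else "cpu"

model = FLIMngo()  # hypothetical constructor
model.load_state_dict(torch.load("flimngo_pretrained_v13102024.pth", map_location=device))
model.to(device).eval()

# Preprocessed TCSPC data of shape (time=256, x=256, y=256), normalised to [0, 1]
stack = np.load("example_flim_stack.npy")          # hypothetical example file
x = torch.from_numpy(stack).float().unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    lifetime_map = model(x.to(device)).squeeze().cpu().numpy()  # per-pixel lifetimes (ns)
```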

Parameters

  • Bin Width (ns): bin_width specifies the width of the time channels of the raw data, in nanoseconds.
  • X, Y Dimensions: Input data must have equal x and y dimensions (e.g., 256 × 256).
  • Time Dimensions: The model currently only accepts raw data with 256 time dimensions.
    • For data that do not match this requirement, refer to predict_diff_time_dimensions.ipynb in demo_notebooks for a method to artificially expand/compress time dimensions (a rough sketch follows below).
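
As a rough, generic illustration of adjusting the time dimension, the sketch below uses simple linear interpolation along the time axis; the notebook may use a different approach, and note that the effective bin width changes when the time axis is resampled.

```python
# Generic illustration only: predict_diff_time_dimensions.ipynb may use a different
# method for expanding/compressing the time dimension.
import numpy as np
from scipy.interpolate import interp1d

def resample_time_axis(stack: np.ndarray, target_bins: int = 256) -> np.ndarray:
    """Resample a (t, x, y) TCSPC stack to target_bins time channels."""
    t = stack.shape[0]
    old_axis = np.linspace(0.0, 1.0, t)
    new_axis = np.linspace(0.0, 1.0, target_bins)
    # Note: the effective bin width scales by t / target_bins after resampling
    return interp1d(old_axis, stack, axis=0)(new_axis)
```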

Preprocessing

  • Normalisation: Time dimensions should be normalised to a range between 0 and 1.
    • See preprocessing steps in demo_notebooks.
  • Background Masking: The background should be masked using either:
    • Intensity thresholding
    • Manual intensity masks (refer to predict_celegans_dynamic.ipynb in demo_notebooks for details).
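
A minimal sketch of these two steps, assuming the raw data are a NumPy array of shape (t, x, y); the function, threshold value and normalisation choice here are illustrative, and the exact workflow is given in demo_notebooks.

```python
# Illustrative preprocessing sketch (not the exact code from demo_notebooks).
import numpy as np

def preprocess(stack: np.ndarray, intensity_threshold: float = 50.0):
    """Normalise each pixel's decay to [0, 1] and mask low-intensity background."""
    # Total photon counts per pixel, used for intensity thresholding
    intensity = stack.sum(axis=0)                  # shape (x, y)
    mask = intensity >= intensity_threshold        # True for foreground pixels

    # Normalise along the time axis so each decay curve lies in [0, 1]
    peak = stack.max(axis=0, keepdims=True)
    normalised = np.divide(stack.astype(float), peak,
                           out=np.zeros(stack.shape, dtype=float),
                           where=peak > 0)

    # Zero out background pixels
    normalised[:, ~mask] = 0.0
    return normalised, mask
```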

Please note the model has been optimised for data collected with IRFs ranging from 100-400 ps.

Demo

FLIMngo maintains high prediction accuracy even for FLIM data with fluorescence decay curves containing as few as 10 photon counts.


Notebooks

  • predict_simulated.ipynb: Evaluates performance on synthetic FLIM data with varying photon counts per pixel.
  • predict_reduced_photon_counts.ipynb: Demonstrates performance on images from different experiments with at least 100 photon counts per pixel, as well as the same images with artificially reduced photon counts (10–100 photons per pixel).
  • predict_diff_time_dimensions.ipynb: Example of predicting lifetimes from input data that do not have 256 time dimensions, with a method for time dimension adjustment.
  • predict_celegans_dynamic.ipynb: Predicting lifetimes from dynamic, non-anaesthetised C. elegans.

Data simulation


The fluorescence intensity images shown in (a) are taken from the Human Protein Atlas (HPA) dataset (Kaggle HPA Single-Cell Image Classification).

HPA images consist of RGBY color channels, representing:

  • R (Red) – Microtubules
  • G (Green) – Protein
  • B (Blue) – Nucleus
  • Y (Yellow) – Endoplasmic Reticulum (ER)

Execution Order

To generate simulated FLIM data, run the notebooks found in the data_simulation directory in the following order:

  1. notebook1_cropp_imgs.ipynb

    • Applies a sliding window approach to extract 256×256 pixel sub-images (x, y) from the HPA fluorescence intensity images, as shown in (a).
  2. notebook2_irf_simulation.ipynb

    • Generates a dataset containing both experimentally acquired and simulated instrument response functions (IRFs).
  3. notebook3_lifetime_simulation.ipynb

    • Simulates 3D FLIM data by assigning a fluorescence lifetime range to each HPA color channel, as illustrated in (b).
    • Perlin noise (example in (c)) is used to determine the fractional contribution of the first color channel to each pixel.
    • For each pixel, fluorescence decay curves are simulated using the following equation (a short sketch follows the symbol definitions below):

    $$y(t) = IRF \otimes \sum_{i=1}^{n} \left( a_i e^{-t/\tau_i} \right) + \text{noise} \tag{1}$$

    where:

  • $$IRF$$ represents the instrument response function.
  • $$n$$ is the number of lifetime components (i.e., the number of color channels contributing to the pixel).
  • $$a_i$$ and $$\tau_i$$ are the fractional contribution and fluorescence lifetime of each color channel at a given pixel, respectively.
  • $$\text{noise}$$ accounts for the Poisson noise typically encountered in TCSPC systems.
  • $$\otimes$$ denotes the convolution between the decay curve and the IRF.
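
As a rough illustration of Equation (1), the sketch below simulates a single pixel's decay curve (IRF convolved with a multi-exponential decay, plus Poisson noise); the parameter values and helper names are examples, not those used in notebook3_lifetime_simulation.ipynb.

```python
# Illustrative sketch of Equation (1) for a single pixel; parameter values and
# function names are examples only.
import numpy as np

def simulate_decay(irf, tau_ns, fractions, bin_width_ns=0.04, n_photons=500, rng=None):
    """Simulate one TCSPC decay curve: (IRF convolved with decay) + Poisson noise."""
    rng = np.random.default_rng() if rng is None else rng
    n_bins = irf.size
    t = np.arange(n_bins) * bin_width_ns

    # Multi-exponential decay: sum_i a_i * exp(-t / tau_i)
    decay = sum(a * np.exp(-t / tau) for a, tau in zip(fractions, tau_ns))

    # Convolve with the IRF and keep the first n_bins time channels
    convolved = np.convolve(irf, decay)[:n_bins]

    # Scale to the desired photon count and add Poisson (shot) noise
    expected = convolved / convolved.sum() * n_photons
    return rng.poisson(expected)

# Example: two lifetime components (2.0 ns and 0.5 ns) and a Gaussian-shaped IRF
irf = np.exp(-0.5 * ((np.arange(256) - 20) / 3.0) ** 2)
curve = simulate_decay(irf, tau_ns=[2.0, 0.5], fractions=[0.7, 0.3])
```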

For further details, please refer to the Materials and Methods section of our manuscript.

Citation

If you found FLIMngo helpful, please consider citing our work! 😊

```bibtex
@article{Kapsiani2025.02.20.639036,
  author = {Kapsiani, Sofia and L{\"a}ubli, Nino F and Ward, Edward N and Fernandez-Villegas, Ana and Mazumder, Bismoy and Kaminski, Clemens F and Kaminski Schierle, Gabriele S},
  title = {Deep learning for fluorescence lifetime predictions enables high-throughput in vivo imaging},
  elocation-id = {2025.02.20.639036},
  year = {2025},
  doi = {10.1101/2025.02.20.639036},
  URL = {https://www.biorxiv.org/content/early/2025/02/26/2025.02.20.639036},
  eprint = {https://www.biorxiv.org/content/early/2025/02/26/2025.02.20.639036.full.pdf},
  journal = {bioRxiv}
}
```
