Segmentation of neurons from fluorescence calcium recordings beyond real time
- Yijun Bao (ORCID: 0000-0001-9641-9946)1,
- Somayyeh Soltanian-Zadeh (ORCID: 0000-0003-2726-8501)1,
- Sina Farsiu (ORCID: 0000-0003-4872-2902)1,2 &
- Yiyang Gong (ORCID: 0000-0002-1855-930X)1,3
Nature Machine Intelligence, volume 3, pages 590–600 (2021)
Abstract
Fluorescent genetically encoded calcium indicators and two-photon microscopy help researchers understand brain function by generating large-scale in vivo recordings in multiple animal models. Automatic, fast and accurate active neuron segmentation is critical when processing these videos. Here we developed and characterized a novel method, Shallow U-Net Neuron Segmentation (SUNS), to quickly and accurately segment active neurons from two-photon fluorescence imaging videos. We used temporal filtering and whitening schemes to extract temporal features associated with active neurons, and a compact shallow U-Net to extract spatial features of neurons. Our method was both more accurate and an order of magnitude faster than state-of-the-art techniques when processing multiple datasets acquired by independent experimental groups; the accuracy gap widened when processing datasets containing few manually marked ground truths. We also developed an online version, potentially enabling real-time feedback neuroscience experiments.
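To make the pipeline concrete, the sketch below illustrates the pre-processing stage named in the abstract: matched temporal filtering of each pixel's trace, followed by whitening into a unitless SNR video. This is a minimal sketch; the median-based baseline and noise estimators are illustrative stand-ins, not the exact SUNS implementation.

```python
import numpy as np
from scipy.ndimage import convolve1d

def snr_video(video, kernel):
    """Convert a raw movie (T, H, W) into an SNR movie: temporally
    filter each pixel with a matched kernel, then whiten by subtracting
    a per-pixel baseline and dividing by a per-pixel noise estimate."""
    # Matched filtering = correlation with the template, i.e. convolution
    # with the time-reversed kernel, applied along the time axis.
    filtered = convolve1d(video.astype(np.float32), kernel[::-1], axis=0)
    # Illustrative robust estimators: median baseline, MAD-based noise.
    baseline = np.median(filtered, axis=0, keepdims=True)
    noise = 1.4826 * np.median(np.abs(filtered - baseline), axis=0,
                               keepdims=True)
    # Whitened, unitless SNR video.
    return (filtered - baseline) / np.maximum(noise, 1e-6)
```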
Data availability
The trained network weights and the optimal hyperparameters can be accessed at https://github.com/YijunBao/SUNS_paper_reproduction/tree/main/paper_reproduction/training%20results. The output masks of all neuron segmentation algorithms can be accessed at https://github.com/YijunBao/SUNS_paper_reproduction/tree/main/paper_reproduction/output%20masks%20all%20methods. We used three public datasets to evaluate the performance of SUNS and other neuron segmentation algorithms. We used the videos of the ABO dataset from https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS, with the corresponding manual labels created in our previous work, https://github.com/soltanianzadeh/STNeuroNet/tree/master/Markings/ABO. We used the Neurofinder dataset from https://github.com/codeneuro/neurofinder, with the corresponding manual labels created in our previous work, https://github.com/soltanianzadeh/STNeuroNet/tree/master/Markings/Neurofinder. We used the videos and manual labels of the CaImAn dataset from https://zenodo.org/record/1659149. A more detailed description of how we used these datasets can be found in the readme at https://github.com/YijunBao/SUNS_paper_reproduction/tree/main/paper_reproduction.
Code availability
Code for SUNS can be accessed at https://github.com/YijunBao/Shallow-UNet-Neuron-Segmentation_SUNS (ref. 51). The version that reproduces the results in this paper can be accessed at https://github.com/YijunBao/SUNS_paper_reproduction (ref. 52).
References
1. Akerboom, J. et al. Genetically encoded calcium indicators for multi-color neural activity imaging and combination with optogenetics. Front. Mol. Neurosci. 6, 2 (2013).
2. Chen, T.-W. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013).
3. Dana, H. et al. High-performance calcium sensors for imaging activity in neuronal populations and microcompartments. Nat. Methods 16, 649–657 (2019).
4. Helmchen, F. & Denk, W. Deep tissue two-photon microscopy. Nat. Methods 2, 932–940 (2005).
5. Stringer, C. et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, eaav7893 (2019).
6. Grewe, B. F. et al. High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. Nat. Methods 7, 399–405 (2010).
7. Soltanian-Zadeh, S. et al. Fast and robust active neuron segmentation in two-photon calcium imaging using spatiotemporal deep learning. Proc. Natl Acad. Sci. USA 116, 8554–8563 (2019).
8. Pnevmatikakis, E. A. Analysis pipelines for calcium imaging data. Curr. Opin. Neurobiol. 55, 15–21 (2019).
9. Klibisz, A. et al. in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (eds Cardoso, J. et al.) 285–293 (Springer, 2017).
10. Gao, S. Automated neuron detection. GitHub https://github.com/iamshang1/Projects/tree/master/Advanced_ML/Neuron_Detection (2016).
11. Shen, S. P. et al. Automatic cell segmentation by adaptive thresholding (ACSAT) for large-scale calcium imaging datasets. eNeuro 5, ENEURO.0056-18.2018 (2018).
12. Spaen, Q. et al. HNCcorr: a novel combinatorial approach for cell identification in calcium-imaging movies. eNeuro 6, ENEURO.0304-18.2019 (2019).
13. Kirschbaum, E., Bailoni, A. & Hamprecht, F. A. DISCo for the CIA: deep learning, instance segmentation, and correlations for calcium imaging analysis. In Medical Image Computing and Computer Assisted Intervention (eds Martel, A. L. et al.) 151–162 (Springer, 2020).
14. Apthorpe, N. J. et al. Automatic neuron detection in calcium imaging data using convolutional networks. Adv. Neural Inf. Process. Syst. 29, 3278–3286 (2016).
15. Mukamel, E. A., Nimmerjahn, A. & Schnitzer, M. J. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron 63, 747–760 (2009).
16. Maruyama, R. et al. Detecting cells using non-negative matrix factorization on calcium imaging data. Neural Netw. 55, 11–19 (2014).
17. Pnevmatikakis, E. A. et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron 89, 285–299 (2016).
18. Pachitariu, M. et al. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. Preprint at bioRxiv https://doi.org/10.1101/061507 (2017).
19. Petersen, A., Simon, N. & Witten, D. SCALPEL: extracting neurons from calcium imaging data. Ann. Appl. Stat. 12, 2430–2456 (2018).
20. Giovannucci, A. et al. CaImAn an open source tool for scalable calcium imaging data analysis. eLife 8, e38173 (2019).
21. Sitaram, R. et al. Closed-loop brain training: the science of neurofeedback. Nat. Rev. Neurosci. 18, 86–100 (2017).
22. Kearney, M. G. et al. Discrete evaluative and premotor circuits enable vocal learning in songbirds. Neuron 104, 559–575.e6 (2019).
23. Carrillo-Reid, L. et al. Controlling visually guided behavior by holographic recalling of cortical ensembles. Cell 178, 447–457.e5 (2019).
24. Rickgauer, J. P., Deisseroth, K. & Tank, D. W. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields. Nat. Neurosci. 17, 1816–1824 (2014).
25. Packer, A. M., Russell, L. E., Dalgleish, H. W. P. & Häusser, M. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nat. Methods 12, 140–146 (2015).
26. Zhang, Z. et al. Closed-loop all-optical interrogation of neural circuits in vivo. Nat. Methods 15, 1037–1040 (2018).
27. Giovannucci, A. et al. OnACID: online analysis of calcium imaging data in real time. In Advances in Neural Information Processing Systems (eds Guyon, I. et al.) (Curran Associates, 2017).
28. Wilt, B. A., Fitzgerald, J. E. & Schnitzer, M. J. Photon shot noise limits on optical detection of neuronal spikes and estimation of spike timing. Biophys. J. 104, 51–62 (2013).
29. Jiang, R. & Crookes, D. Shallow unorganized neural networks using smart neuron model for visual perception. IEEE Access 7, 152701–152714 (2019).
30. Ba, J. & Caruana, R. Do deep nets really need to be deep? Adv. Neural Inf. Process. Syst. (2014).
31. Lei, F., Liu, X., Dai, Q. & Ling, B. W.-K. Shallow convolutional neural network for image classification. SN Appl. Sci. 2, 97 (2019).
32. Yu, S. et al. A shallow convolutional neural network for blind image sharpness assessment. PLoS ONE 12, e0176632 (2017).
33. Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation (Springer, 2015).
34. Code Neurofinder (CodeNeuro, 2019); http://neurofinder.codeneuro.org/
35. Arac, A. et al. DeepBehavior: a deep learning toolbox for automated analysis of animal and human behavior imaging data. Front. Syst. Neurosci. 13, 20 (2019).
36. Shen, D., Wu, G. & Suk, H.-I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 19, 221–248 (2017).
37. Zhou, P. et al. Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data. eLife 7, e28728 (2018).
38. Meyer, F. Topographic distance and watershed lines. Signal Process. 38, 113–125 (1994).
39. Pnevmatikakis, E. A. & Giovannucci, A. NoRMCorre: an online algorithm for piecewise rigid motion correction of calcium imaging data. J. Neurosci. Methods 291, 83–94 (2017).
40. Keemink, S. W. et al. FISSA: a neuropil decontamination toolbox for calcium imaging signals. Sci. Rep. 8, 3493 (2018).
41. Mitani, A. & Komiyama, T. Real-time processing of two-photon calcium imaging data including lateral motion artifact correction. Front. Neuroinform. 12, 98 (2018).
42. Frankle, J. & Carbin, M. The lottery ticket hypothesis: finding sparse, trainable neural networks. In International Conference on Learning Representations (ICLR, 2019).
43. Yang, W. & Lihong, X. Lightweight compressed depth neural network for tomato disease diagnosis. Proc. SPIE (2020).
44. Oppenheim, A., Schafer, R. & Stockham, T. Nonlinear filtering of multiplied and convolved signals. IEEE Trans. Audio Electroacoust. 16, 437–466 (1968).
45. Szymanska, A. F. et al. Accurate detection of low signal-to-noise ratio neuronal calcium transient waves using a matched filter. J. Neurosci. Methods 259, 1–12 (2016).
46. Milletari, F., Navab, N. & Ahmadi, S. V-Net: fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision 565–571 (3DV, 2016).
47. Lin, T.-Y. et al. Focal loss for dense object detection. In Proc. IEEE International Conference on Computer Vision 2980–2988 (IEEE, 2017).
48. de Vries, S. E. J. et al. A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nat. Neurosci. 23, 138–151 (2020).
49. Gilman, J. P., Medalla, M. & Luebke, J. I. Area-specific features of pyramidal neurons—a comparative study in mouse and rhesus monkey. Cereb. Cortex 27, 2078–2094 (2016).
50. Ballesteros-Yáñez, I. et al. Alterations of cortical pyramidal neurons in mice lacking high-affinity nicotinic receptors. Proc. Natl Acad. Sci. USA 107, 11567–11572 (2010).
51. Bao, Y. YijunBao/Shallow-UNet-Neuron-Segmentation_SUNS. Zenodo https://doi.org/10.5281/zenodo.4638171 (2021).
52. Bao, Y. YijunBao/SUNS_paper_reproduction. Zenodo https://doi.org/10.5281/zenodo.4638135 (2021).
Acknowledgements
We acknowledge support from the BRAIN Initiative (NIH 1UF1-NS107678, NSF 3332147), the NIH New Innovator Program (1DP2-NS111505), the Beckman Young Investigator Program, the Sloan Fellowship and the Vallee Young Investigator Program received by Y.G. We acknowledge Z. Zhu for early characterization of SUNS.
Author information
Authors and Affiliations
Department of Biomedical Engineering, Duke University, Durham, NC, USA
Yijun Bao, Somayyeh Soltanian-Zadeh, Sina Farsiu & Yiyang Gong
Department of Ophthalmology, Duke University Medical Center, Durham, NC, USA
Sina Farsiu
Department of Neurobiology, Duke University, Durham, NC, USA
Yiyang Gong
Contributions
Y.G. conceived and designed the project. Y.B. and Y.G. implemented the code for SUNS. Y.B. and S.S.-Z. implemented the code for other algorithms for comparison. Y.B. ran the experiment. Y.B., S.S.-Z., S.F. and Y.G. analysed the data. Y.B., S.S.-Z., S.F. and Y.G. wrote the paper.
Corresponding authors
Correspondence to Yijun Bao or Yiyang Gong.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information: Nature Machine Intelligence thanks Xue Han and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
Extended Data Fig. 1 The average calcium response formed the temporal filter kernel.
We determined the temporal matched filter kernel by averaging calcium transients within a moderate SNR range; these transients likely represent the temporal response to single action potentials (ref. 2). a, Example data show all background-subtracted fluorescence calcium transients of all GT neurons in all videos of the ABO 275 μm dataset whose peak SNR (pSNR) fell in the range 6 < pSNR < 8 (gray). We minimized crosstalk from neighboring neurons by excluding transients during time periods when neighboring neurons also had transients. We normalized all transients to unit peak value, and then averaged the normalized transients into an average spike trace (red). We used the portion of the average spike trace above e^-1 (blue dashed line) as the final template kernel. b, When analyzing performance on the ABO 275 μm dataset through ten-fold leave-one-out cross-validation, using the temporal kernel determined in (a) within our temporal filter scheme achieved a significantly higher F1 score than using no temporal filter or an unmatched filter (*P < 0.05, **P < 0.005; two-sided Wilcoxon signed-rank test, n = 10 videos), and a slightly higher F1 score than using a single exponentially decaying kernel (P = 0.77; two-sided Wilcoxon signed-rank test, n = 10 videos). Error bars are s.d. The gray dots represent scores for the test data in each round of cross-validation. The unmatched filter was a moving-average filter over 60 frames. c,d, Analogous to (a,b), but for the Neurofinder dataset. We determined the filter kernel using videos 04.01 and 04.01.test.
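The kernel construction in this caption can be summarized in a few lines. This is a minimal sketch, assuming the transients are already peak-aligned and crosstalk-filtered upstream; the trace handling is illustrative, not the exact SUNS code.

```python
import numpy as np

def matched_kernel(transients, cutoff=np.exp(-1)):
    """Average peak-normalized transients and keep the portion of the
    average above e^-1, following the procedure in this caption."""
    # Trim to a common length, then normalize each transient to unit
    # peak (traces are assumed peak-aligned upstream).
    length = min(len(t) for t in transients)
    trimmed = [np.asarray(t[:length], dtype=float) for t in transients]
    normalized = np.stack([t / t.max() for t in trimmed])
    avg = normalized.mean(axis=0)
    # Keep the span where the averaged trace exceeds the cutoff.
    above = np.flatnonzero(avg >= cutoff)
    kernel = avg[above[0]:above[-1] + 1]
    # Normalize the kernel so filtering preserves trace scale.
    return kernel / kernel.sum()
```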
Extended Data Fig. 2 The complexity of the CNN architecture controlled the tradeoff between speed and accuracy.
We explored multiple potential CNN architectures to optimize performance. a–d, CNN architectures with depths of (a) two, (b) three, (c) four, or (d) five. For the three-depth architecture, we also tested different numbers of skip connections, ReLU (rectified linear unit) instead of ELU (exponential linear unit) as the activation function, and separable Conv2D instead of Conv2D in the encoding path. The dense five-depth model mimicked the model used in UNet2DS (ref. 9). The legend '0/ni + ni' indicates whether the skip connection was used (ni + ni) or not (0 + ni). e, The F1 score and processing speed of SUNS using various CNN architectures when analyzing the ABO 275 μm dataset through ten-fold leave-one-out cross-validation. The right panel zooms in on the rectangular region in the left panel. Error bars are s.d. The legend (n1, n2, …, nk) denotes an architecture of depth k with ni channels at the ith depth. We determined that the three-depth model (4, 8, 16), using one skip connection at the shallowest layer, ELU, and full Conv2D (Fig. 1c), offered a good trade-off between speed and accuracy; we used this architecture as the SUNS architecture throughout the paper. One important drawback of the ReLU activation function was its occasional (20% of the time) failure during training, compared with negligible failure rates for the ELU activation function.
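For readers who want the selected architecture in code form, below is a minimal sketch of the three-depth (4, 8, 16) shallow U-Net with a single skip connection at the shallowest layer and ELU activations, written in TensorFlow/Keras. Kernel sizes, padding, and the sigmoid output layer are assumptions, not the exact SUNS implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def shallow_unet(input_shape=(None, None, 1)):
    """Three-depth shallow U-Net with channel counts (4, 8, 16);
    input dimensions are assumed divisible by 4."""
    inputs = layers.Input(shape=input_shape)
    # Encoding path.
    c1 = layers.Conv2D(4, 3, padding="same", activation="elu")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(8, 3, padding="same", activation="elu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)
    # Bottleneck at the deepest level.
    c3 = layers.Conv2D(16, 3, padding="same", activation="elu")(p2)
    # Decoding path; only the shallowest level keeps a skip connection.
    u2 = layers.UpSampling2D(2)(c3)
    c4 = layers.Conv2D(8, 3, padding="same", activation="elu")(u2)
    u1 = layers.UpSampling2D(2)(c4)
    u1 = layers.Concatenate()([u1, c1])  # the single skip connection
    c5 = layers.Conv2D(4, 3, padding="same", activation="elu")(u1)
    # Per-pixel probability of belonging to an active neuron.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c5)
    return tf.keras.Model(inputs, outputs)
```

Keeping only the shallowest skip connection preserves fine spatial detail at the output resolution while keeping the decoder small, consistent with the speed-accuracy trade-off described in the caption.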
Extended Data Fig. 3 The F1 score of SUNS was robust to moderate variation of the training and post-processing parameters.
We tested whether the accuracy of SUNS when analyzing the ABO 275 μm dataset through ten-fold leave-one-out cross-validation relied on intricate tuning of the algorithm's hyperparameters. The evaluated training parameters were (a) the threshold of the SNR video (thSNR) and (b) the training batch size. The evaluated post-processing parameters were (c) the threshold of the probability map (thprob), (d) the minimum neuron area (tharea), (e) the threshold of COM distance (thCOM), and (f) the minimum number of consecutive frames (thframe). The solid blue lines are the average F1 scores, and the shaded regions are the mean ± one s.d. When evaluating the post-processing parameters in (c–f), we fixed each parameter under investigation at the given values and simultaneously optimized the F1 score over the other parameters. Variations in these hyperparameters produced only small variations in the F1 performance. The orange lines show the F1 score (solid) ± one s.d. (dashed) when we optimized all four post-processing parameters simultaneously. The similarity between the F1 scores on the blue lines and those on the orange lines suggests that optimizing three or four parameters simultaneously achieved similar optimized performance. Moreover, the relatively consistent F1 scores on the blue lines suggest that our algorithm did not rely on intricate hyperparameter tuning.
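The four post-processing hyperparameters can be read as stages of a simple pipeline. The sketch below is a minimal illustration with placeholder defaults (th_prob, th_area, th_com and th_frame mirror the caption's thprob, tharea, thCOM and thframe); the greedy center-of-mass grouping is an illustrative stand-in for the actual SUNS merging procedure.

```python
import numpy as np
from scipy import ndimage

def postprocess(prob_maps, th_prob=0.5, th_area=25, th_com=3.0, th_frame=4):
    """Threshold per-frame probability maps, discard small regions,
    merge detections with nearby centers of mass (COM), and keep only
    neurons active on at least th_frame consecutive frames."""
    candidates = []  # (frame index, COM, pixel mask) per detection
    for t, pmap in enumerate(prob_maps):
        labeled, n = ndimage.label(pmap > th_prob)
        for region in range(1, n + 1):
            mask = labeled == region
            if mask.sum() >= th_area:  # minimum neuron area
                candidates.append((t, ndimage.center_of_mass(mask), mask))
    # Greedily group detections with nearby COMs into putative neurons.
    neurons = []
    for t, com, mask in candidates:
        for nrn in neurons:
            if np.hypot(com[0] - nrn["com"][0],
                        com[1] - nrn["com"][1]) < th_com:
                nrn["frames"].append(t)
                nrn["mask"] |= mask
                break
        else:
            neurons.append({"com": com, "frames": [t], "mask": mask.copy()})

    def max_consecutive(frames):
        # Longest run of consecutive frame indices.
        frames = sorted(set(frames))
        best = run = 1
        for a, b in zip(frames, frames[1:]):
            run = run + 1 if b == a + 1 else 1
            best = max(best, run)
        return best

    return [n["mask"] for n in neurons
            if max_consecutive(n["frames"]) >= th_frame]
```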
Extended Data Fig. 4 The performance of SUNS was better than that of other methods in the presence of intensity noise or motion artifacts.
The (a,d) recall, (b,e) precision, and (c,f) F1 score of all (a–c) batch and (d–f) online segmentation algorithms in the presence of increasing intensity noise. The test dataset was the ABO 275 μm data with added random noise. The relative noise strength is the ratio of the standard deviation of the random noise amplitude to the mean fluorescence intensity. As expected, the F1 scores of all methods decreased as the noise amplitude grew. The F1 score of SUNS exceeded those of all other methods at all noise intensities. g–l, Same format as (a–f), but showing performance in the presence of increasing motion artifacts. The motion artifact strength is the standard deviation of the random movement amplitude (unit: pixels). As expected, the F1 scores of all methods decreased as the motion artifacts became stronger. The F1 score of SUNS exceeded those of all other methods at all motion amplitudes. STNeuroNet and CaImAn Batch were the most sensitive to strong motion artifacts, likely because they rely on accurate 3D spatiotemporal structure of the video. In contrast, SUNS relied more on the 2D spatial structure, so it retained accuracy better when spatial structures changed position across frames.
Extended Data Fig. 5 SUNS accurately mapped the spatial extent of each neuron even if the spatial footprints of neighboring cells overlapped.
SUNS segmented active neurons within each individual frame and then accurately collected and merged the instances belonging to the same neurons. We selected two example pairs of overlapping neurons, identified by SUNS in ABO video 539670003, and show their traces and instances when they were activated independently. a, SNR images of the region surrounding the selected neurons. The left image is the maximum projection of the SNR video over the entire recording, which shows that the two neurons were active and overlapping. The right images are single-frame SNR images at two time points, each at the peak of a fluorescence transient where only one of the two neurons was active. The segmentation of each neuron generated by SUNS is shown as a contour in a different color. The scale bar is 3 μm. b, The temporal SNR traces of the selected neurons, matched to the colors of their contours in (a). Because the pairs of neurons overlapped, their fluorescence traces displayed substantial crosstalk. The dash markers above each trace show the active periods of each neuron determined by SUNS. The colored triangles below each trace indicate the manually selected times of the single-frame images shown in (a). c,d, Parallel to (a,b), but for a different overlapping neuron pair. e, We quantified each segmentation algorithm's ability to find overlapping neurons using the recall score. We divided the ground-truth neurons in all ABO videos into two groups: neurons without and with overlap with other neurons. We then computed the recall scores for both groups. The recall of SUNS on spatially overlapping neurons was not significantly lower (and was numerically higher) than its recall on non-overlapping neurons (P > 0.8, one-sided Wilcoxon rank-sum test, n = 10 videos; n.s.l., not significantly lower). Therefore, the performance of SUNS on overlapping neurons was at least as good as its performance on non-overlapping neurons. Moreover, the recall scores of SUNS in both groups were comparable to or significantly higher than those of the other methods (**P < 0.005; n.s., not significant; two-sided Wilcoxon signed-rank test, n = 10 videos; error bars are s.d.). The gray dots represent the scores on the test data for each round of cross-validation.
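The grouped recall analysis in panel (e) amounts to partitioning ground-truth neurons by spatial overlap and scoring each partition separately. A minimal sketch, assuming the GT-to-detection matching (for example, by intersection-over-union) is computed upstream:

```python
import numpy as np

def recall_by_overlap(gt_masks, matched):
    """Recall for GT neurons that do or do not spatially overlap another
    GT neuron. `gt_masks` is a list of boolean (H, W) masks; `matched`
    is a parallel boolean array marking which GT neurons an algorithm
    recovered."""
    matched = np.asarray(matched, dtype=bool)
    overlaps = np.zeros(len(gt_masks), dtype=bool)
    for i, mi in enumerate(gt_masks):
        for j, mj in enumerate(gt_masks):
            if i != j and np.any(mi & mj):
                overlaps[i] = True  # shares at least one pixel
                break
    recall = lambda sel: matched[sel].mean() if sel.any() else np.nan
    return {"overlapping": recall(overlaps),
            "non_overlapping": recall(~overlaps)}
```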
Extended Data Fig. 6 Each pre-processing step and the CNN contributed to the accuracy of SUNS at the cost of lower speed.
We evaluated the contribution of each pre-processing option (spatial filtering, temporal filtering, and SNR normalization) and of the CNN to SUNS. The reference algorithm (SUNS) used all options except spatial filtering. We compared the performance of this reference algorithm to its performance with additional spatial filtering (optional SF), without temporal filtering (no TF), without SNR normalization (no SNR), and without the CNN (no CNN) when analyzing the ABO 275 μm dataset through ten-fold leave-one-out cross-validation. a, The recall, precision, and F1 score of these variants. Temporal filtering, SNR normalization, and the CNN each contributed significantly to the overall accuracy, but the impact of spatial filtering was not significant (*P < 0.05, **P < 0.005; n.s., not significant; two-sided Wilcoxon signed-rank test, n = 10 videos; error bars are s.d.). The gray dots represent the scores on the test data for each round of cross-validation. b, The speed and F1 score of these variants. Eliminating temporal filtering or the CNN significantly increased the speed, while adding spatial filtering or eliminating SNR normalization significantly lowered the speed (**P < 0.005; two-sided Wilcoxon signed-rank test, n = 10 videos; error bars are s.d.). The light-colored dots represent F1 scores and speeds for the test data in each round of cross-validation. The execution of SNR normalization was fast (~0.07 ms per frame). However, eliminating SNR normalization led to a much lower optimal thprob, which increased the number of active pixels and decreased precision. In addition, 'no SNR' ran slower than the complete SUNS algorithm because of the extra post-processing workload of managing the additional active pixels and regions.
Extended Data Fig. 7 The recall, precision, and F1 score of SUNS were superior to those of other methods on a variety of datasets.
a, Training on one ABO 275 μm video and testing on nine ABO 275 μm videos (each data point is the average over a set of nine test videos, n = 10). b, Training on ten ABO 275 μm videos and testing on ten ABO 175 μm videos (n = 10). c, Training on one Neurofinder video and testing on its paired Neurofinder video (n = 12). d, Training on three-quarters of one CaImAn video and testing on the remaining quarter of the same video (n = 16). The F1 scores of SUNS were mostly significantly higher than those of other methods (*P < 0.05, **P < 0.005, ***P < 0.001; n.s., not significant; two-sided Wilcoxon signed-rank test; error bars are s.d.). The gray dots represent the individual scores for each round of cross-validation.
Extended Data Fig. 8 SUNS online outperformed CaImAn Online in accuracy and speed when processing a variety of datasets.
a,e, Training on one ABO 275 μm video and testing on nine ABO 275 μm videos (each data point is the average over a set of nine test videos, n = 10). b,f, Training on ten ABO 275 μm videos and testing on ten ABO 175 μm videos (n = 10). c,g, Training on one Neurofinder video and testing on its paired Neurofinder video (n = 12). d,h, Training on three-quarters of one CaImAn video and testing on the remaining quarter of the same video (n = 16). The F1 score and processing speed of SUNS online were significantly higher than those of CaImAn Online (**P < 0.005, ***P < 0.001; two-sided Wilcoxon signed-rank test; error bars are s.d.). The gray dots in (a–d) represent individual scores for each round of cross-validation. The light-colored dots in (e–g) represent F1 scores and speeds for the test data in each round of cross-validation. The light-colored markers in (h) represent F1 scores and speeds for the test data in each round of cross-validation performed on different CaImAn videos. We updated the baseline and noise regularly after initialization for the Neurofinder dataset, but not for the other datasets.
Extended Data Fig. 9 Changing the frequency of updating the neuron masks modulated the trade-off between SUNS online's response time to new neurons and its accuracy and speed.
The (a–c) F1 score and (d–f) speed of SUNS online increased as the number of frames per update (nmerge) increased for the (a,d) ABO 275 μm, (b,e) Neurofinder, and (c,f) CaImAn datasets. The solid line is the average, and the shading is one s.d. from the average (n = 10, 12, and 16 cross-validation rounds for the three datasets). In (a–c), the green lines show the F1 score (solid) ± one s.d. (dashed) of SUNS batch. The F1 score and speed generally increased as nmerge increased. For example, the F1 score and speed using nmerge = 500 were higher than those using nmerge = 20, and some of the differences were significant (*P < 0.05, **P < 0.005, ***P < 0.001; n.s., not significant; two-sided Wilcoxon signed-rank test; n = 10, 12, and 16, respectively). We updated the baseline and noise regularly after initialization for the Neurofinder dataset, but not for the other datasets. nmerge was inversely proportional to the update frequency, that is, to the responsiveness of SUNS online to the appearance of new neurons. A trade-off exists between this responsiveness and the accuracy and speed of SUNS online. At the cost of reduced responsiveness, a higher nmerge allowed the accumulation of temporal information and improved the accuracy of the neuron segmentations. Likewise, a higher nmerge improved the speed because it reduced how often the neuron-aggregation computations ran.
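The nmerge schedule can be sketched as a simple buffered loop. The function names below (segment_frame, merge_masks) are placeholders, not the SUNS API; the sketch only shows how a larger nmerge batches the merging work.

```python
def suns_online_loop(frames, segment_frame, merge_masks, n_merge=100):
    """Segment each incoming frame immediately, but merge per-frame
    detections into the running neuron list only every n_merge frames,
    trading responsiveness to new neurons for accuracy and speed."""
    neuron_masks = []   # accumulated neuron footprints
    pending = []        # per-frame detections awaiting a merge step
    for t, frame in enumerate(frames, start=1):
        pending.extend(segment_frame(frame))
        if t % n_merge == 0:  # larger n_merge -> fewer, cheaper merges
            neuron_masks = merge_masks(neuron_masks, pending)
            pending = []
    if pending:               # flush detections from the final frames
        neuron_masks = merge_masks(neuron_masks, pending)
    return neuron_masks
```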
Extended Data Fig. 10 Updating the baseline and noise after initialization increased the accuracy of SUNS online at the cost of lower speed.
We compared the F1 score and speed of SUNS online with and without the baseline and noise update for the (a) ABO 275 μm, (b) Neurofinder, and (c) CaImAn datasets. The F1 scores with the baseline and noise update were generally higher, but the speeds were lower (*P < 0.05, **P < 0.005, ***P < 0.001; n.s., not significant; two-sided Wilcoxon signed-rank test; error bars are s.d.). The light-colored dots represent F1 scores and speeds for the test data in each round of cross-validation. The improvement in the F1 score was larger when the baseline fluctuations were more significant. d, Example processing time per frame of SUNS online with the baseline and noise update on Neurofinder video 02.00. The lower inset zooms in on the data in the red box. The upper inset is the distribution of processing times per frame. The processing time per frame was consistently shorter than the microscope frame interval (125 ms per frame). The first few frames after initialization were processed faster than the following frames because the baseline and noise update was not yet performed in those frames.
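A minimal sketch of a periodic baseline and noise refresh follows; the running-average blend and the median/MAD estimators are illustrative assumptions, not the exact SUNS online scheme.

```python
import numpy as np

def update_baseline_noise(baseline, noise, new_frames, weight=0.1):
    """Re-estimate the per-pixel baseline and noise from a buffer of
    recent frames (T, H, W), then blend with the old estimates so the
    baseline tracks slow drift without abrupt jumps."""
    recent_baseline = np.median(new_frames, axis=0)
    recent_noise = 1.4826 * np.median(
        np.abs(new_frames - recent_baseline), axis=0)
    baseline = (1 - weight) * baseline + weight * recent_baseline
    noise = (1 - weight) * noise + weight * recent_noise
    return baseline, noise
```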
Supplementary information
Supplementary Information
Supplementary Figs. 1–14, Tables 1–10 and Methods.
Supplementary Video 1
Demonstration of how SUNS online gradually found new neurons in an example raw video. The example video consists of selected frames from the fourth quadrant of the video YST from the CaImAn dataset. We show the results of SUNS online without the 'tracking' option (left) and with the 'tracking' option (right). The red contours are the neurons segmented from all frames before the current frame. We updated the identified neuron contours every second (ten frames), so the red neuron contours appear with some delay after a neuron's initial activation. The green contours in the right panel are neurons that were found in previous frames and appear active in the current frame. Such tracked activity enables follow-up analysis of animal behaviours or brain network structures in real-time feedback neuroscience experiments.
Supplementary Video 2
Demonstration of how SUNS online gradually found new neurons in an example SNR video. The example video consists of selected frames from the fourth quadrant of the video YST from the CaImAn dataset after pre-processing and conversion to an SNR video. We show the results of SUNS online without the 'tracking' option (left) and with the 'tracking' option (right). The red contours are the neurons segmented from all frames before the current frame. We updated the identified neuron contours every second (ten frames), so the red neuron contours appear with some delay after a neuron's initial activation. The green contours in the right panel are neurons that were found in previous frames and appear active in the current frame. Such tracked activity enables follow-up analysis of animal behaviours or brain network structures in real-time feedback neuroscience experiments.
About this article
Cite this article
Bao, Y., Soltanian-Zadeh, S., Farsiu, S. et al. Segmentation of neurons from fluorescence calcium recordings beyond real time. Nat. Mach. Intell. 3, 590–600 (2021). https://doi.org/10.1038/s42256-021-00342-x