- Yiming Qin,
- Jiajia Li,
- Yulong Chen,
- Zikai Wang,
- Yu-An Huang,
- Zhuhong You,
- Lun Hu,
- Pengwei Hu &
- Feng Tan
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14088)
Included in the following conference series: International Conference on Intelligent Computing (ICIC)
Abstract
Organoid research plays an important role in drug screening and disease modeling. Obtaining accurate information about organoid morphology, number, and size is fundamental to this research. However, previous methods rely on fluorescence labeling, which can harm organoids, or suffer from limited accuracy and robustness. In this paper, we introduce the Transformer architecture into the organoid segmentation task for the first time and propose an end-to-end multi-modal method named TransOrga. To enhance accuracy and robustness, we utilize a multi-modal feature extraction module that blends spatial- and frequency-domain features of organoid images. Furthermore, we propose a multi-branch aggregation decoder that learns diverse contexts from various Transformer layers to predict the segmentation mask progressively. In addition, we design a series of losses, including focal loss, dice loss, compact loss, and auxiliary loss, to supervise our model to predict more accurate segmentation results with rational sizes and shapes. Extensive experiments demonstrate that our method outperforms the baselines in organoid segmentation and provides an automatic, robust, and fluorescence-free tool for organoid research.
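The abstract names four supervisory terms (focal, dice, compact, and auxiliary losses) used to encourage accurate masks with rational sizes and shapes. The PyTorch sketch below shows one plausible way such a composite objective could be assembled; the loss weights, the finite-difference compactness approximation, and the auxiliary supervision of intermediate decoder branches are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch of a composite segmentation objective
# (focal + dice + compactness + auxiliary), not the paper's code.
import torch
import torch.nn.functional as F

def focal_loss(logits, target, alpha=0.25, gamma=2.0):
    # Binary focal loss on per-pixel logits; down-weights easy pixels.
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def dice_loss(logits, target, eps=1e-6):
    # Soft Dice loss measuring region overlap with the ground-truth mask.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(-2, -1))
    union = prob.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def compact_loss(logits, eps=1e-6):
    # Soft compactness penalty ~ perimeter^2 / (4*pi*area), with the perimeter
    # approximated by finite differences of the probability map (an assumption).
    prob = torch.sigmoid(logits)
    dy = torch.abs(prob[..., 1:, :] - prob[..., :-1, :])
    dx = torch.abs(prob[..., :, 1:] - prob[..., :, :-1])
    perimeter = dy.sum(dim=(-2, -1)) + dx.sum(dim=(-2, -1))
    area = prob.sum(dim=(-2, -1))
    return (perimeter ** 2 / (4 * torch.pi * area + eps)).mean()

def total_loss(main_logits, aux_logits_list, target,
               w_focal=1.0, w_dice=1.0, w_compact=0.1, w_aux=0.4):
    # Main prediction: focal + dice + compactness; intermediate decoder
    # branches contribute an auxiliary dice term (illustrative weights).
    loss = (w_focal * focal_loss(main_logits, target)
            + w_dice * dice_loss(main_logits, target)
            + w_compact * compact_loss(main_logits))
    for aux in aux_logits_list:
        loss = loss + w_aux * dice_loss(aux, target)
    return loss
```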
Funding
This work was supported by the Xinjiang Tianchi Talents Program (E33B9401).
Author information
Authors and Affiliations
Shanghai Jiao Tong University, Shanghai, China
Yiming Qin, Jiajia Li & Yulong Chen
Shanghai Artificial Intelligence Research Institute, Shanghai, China
Jiajia Li & Zikai Wang
Northwestern Polytechnical University, Xi’an, China
Yu-An Huang & Zhuhong You
Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi, China
Lun Hu & Pengwei Hu
Merck KGaA, Darmstadt, Germany
Feng Tan
Corresponding author
Correspondence to Feng Tan.
Editor information
Editors and Affiliations
Department of Computer Science, Eastern Institute of Technology, Zhejiang, China
De-Shuang Huang
University of Wollongong, North Wollongong, NSW, Australia
Prashan Premaratne
Zhengzhou University of Light Industry, Zhengzhou, China
Baohua Jin
Zhong Yuan University of Technology, Zhengzhou, China
Boyang Qu
University of Ulsan, Ulsan, Korea (Republic of)
Kang-Hyun Jo
Department of Computer Science, Liverpool John Moores University, Liverpool, UK
Abir Hussain
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Qin, Y. et al. (2023). TransOrga: End-To-End Multi-modal Transformer-Based Organoid Segmentation. In: Huang, D.S., Premaratne, P., Jin, B., Qu, B., Jo, K.H., Hussain, A. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2023. Lecture Notes in Computer Science, vol 14088. Springer, Singapore. https://doi.org/10.1007/978-981-99-4749-2_39
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-4748-5
Online ISBN: 978-981-99-4749-2
eBook Packages: Computer Science, Computer Science (R0)