
TransOrga: End-To-End Multi-modal Transformer-Based Organoid Segmentation

  • Conference paper

Abstract

Organoid research plays an important role in drug screening and disease modeling. Obtaining accurate information about organoid morphology, number, and size is fundamental to this research. However, previous methods either relied on fluorescence labeling, which can harm organoids, or suffered from limited accuracy and robustness. In this paper, we introduce the Transformer architecture into the organoid segmentation task for the first time and propose an end-to-end multi-modal method named TransOrga. To enhance accuracy and robustness, we employ a multi-modal feature extraction module that blends spatial- and frequency-domain features of organoid images. Furthermore, we propose a multi-branch aggregation decoder that learns diverse contexts from different Transformer layers to predict the segmentation mask progressively. In addition, we design a set of losses, including focal, dice, compact, and auxiliary losses, to supervise the model toward more accurate segmentation results with plausible sizes and shapes. Extensive experiments demonstrate that our method outperforms the baselines in organoid segmentation and provides an automatic, robust, and fluorescence-free tool for organoid research.
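The abstract names a composite training objective (focal, dice, compact, and auxiliary losses) and a multi-modal input that blends spatial- and frequency-domain features. The sketches below are illustrative only: they use standard formulations of focal and dice loss, a common perimeter-squared-over-area compactness surrogate, and a simple FFT log-magnitude channel; all weights, tensor shapes, and implementation details are assumptions and not taken from the paper.

```python
# Illustrative sketch (not the authors' code): a composite mask loss combining
# focal, dice, and a compactness term. Weights and details are assumptions.
import math
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    # Standard per-pixel binary focal loss.
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    prob = torch.sigmoid(logits)
    p_t = prob * target + (1 - prob) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def dice_loss(logits, target, eps=1e-6):
    # Soft dice loss over foreground probabilities.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def compact_loss(logits, eps=1e-6):
    # Compactness surrogate: (soft perimeter)^2 / (4 * pi * soft area),
    # penalising ragged masks with implausible shapes.
    prob = torch.sigmoid(logits)
    dy = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs().sum()
    dx = (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().sum()
    perimeter, area = dy + dx, prob.sum()
    return perimeter ** 2 / (4.0 * math.pi * area + eps)

def segmentation_loss(logits, target, weights=(1.0, 1.0, 0.1)):
    # logits, target: [B, 1, H, W]; the weighting is hypothetical.
    w_focal, w_dice, w_compact = weights
    return (w_focal * focal_loss(logits, target)
            + w_dice * dice_loss(logits, target)
            + w_compact * compact_loss(logits))
```

Likewise, a minimal way to expose frequency-domain information to a spatial backbone (purely an assumption about what such a fusion module might do, not the paper's design) is to append a normalised FFT log-magnitude channel to the input:

```python
def add_frequency_channel(images, eps=1e-6):
    # images: [B, 1, H, W] brightfield images; returns [B, 2, H, W].
    spectrum = torch.fft.fftshift(torch.fft.fft2(images))
    log_mag = torch.log1p(spectrum.abs())
    log_mag = log_mag / (log_mag.amax(dim=(-2, -1), keepdim=True) + eps)
    return torch.cat([images, log_mag], dim=1)
```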



Funding

This work was supported by the Xinjiang Tianchi Talents Program (E33B9401).

Author information

Authors and Affiliations

  1. Shanghai Jiao Tong University, Shanghai, China

    Yiming Qin, Jiajia Li & Yulong Chen

  2. Shanghai Artificial Intelligence Research Institute, Shanghai, China

    Jiajia Li & Zikai Wang

  3. Northwestern Polytechnical University, Xi’an, China

    Yu-An Huang & Zhuhong You

  4. Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Shanghai, China

    Lun Hu & Pengwei Hu

  5. Merck KGaA, Darmstadt, Germany

    Feng Tan

Authors
  1. Yiming Qin
  2. Jiajia Li
  3. Yulong Chen
  4. Zikai Wang
  5. Yu-An Huang
  6. Zhuhong You
  7. Lun Hu
  8. Pengwei Hu
  9. Feng Tan

Corresponding author

Correspondence to Feng Tan.

Editor information

Editors and Affiliations

  1. Department of Computer Science, Eastern Institute of Technology, Zhejiang, China

    De-Shuang Huang

  2. University of Wollongong, North Wollongong, NSW, Australia

    Prashan Premaratne

  3. Zhengzhou University of Light Industry, Zhengzhou, China

    Baohua Jin

  4. Zhong Yuan University of Technology, Zhengzhou, China

    Boyang Qu

  5. University of Ulsan, Ulsan, Korea (Republic of)

    Kang-Hyun Jo

  6. Department of Computer Science, Liverpool John Moores University, Liverpool, UK

    Abir Hussain

Rights and permissions

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Qin, Y. et al. (2023). TransOrga: End-To-End Multi-modal Transformer-Based Organoid Segmentation. In: Huang, DS., Premaratne, P., Jin, B., Qu, B., Jo, KH., Hussain, A. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2023. Lecture Notes in Computer Science, vol 14088. Springer, Singapore. https://doi.org/10.1007/978-981-99-4749-2_39



