
Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction Based on Semi-supervised Contrastive Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14092)


Abstract

Domain adaptation has been widely adopted to transfer styles across multiple vendors and centers, as well as to complete missing modalities. In this challenge, we propose an unsupervised domain adaptation framework for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction. We learn a shared representation from both ceT1 and hrT2 images and recover the missing modality from the latent representation, and we also use the proxy tasks of VS segmentation and brain parcellation to constrain the structural consistency of images during domain adaptation. After generating the missing modalities, an nnU-Net model is used for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve model performance for Koos grade prediction. On the CrossMoDA validation-phase leaderboard, our method ranked 4th in Task 1 with a mean Dice score of 0.8394 and 2nd in Task 2 with a macro-average mean squared error of 0.3941. Our code is available at https://github.com/fiy2W/cmda2022.superpolymerization.
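The two leaderboard numbers quoted above come from different metrics: the Dice similarity coefficient for the Task 1 segmentation masks, and a macro-averaged mean squared error over the four Koos grades for Task 2. As a minimal sketch (the toy inputs are illustrative, not challenge data), both can be computed as follows:

```python
# Sketch of the two metrics named in the abstract: Dice for segmentation
# (Task 1) and macro-average MSE over Koos grades 1-4 (Task 2).

def dice_score(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

def macro_mse(pred_grades, true_grades, classes=(1, 2, 3, 4)):
    """Average the per-class MSE so rare Koos grades weigh as much as common ones."""
    per_class = []
    for c in classes:
        errs = [(p - t) ** 2 for p, t in zip(pred_grades, true_grades) if t == c]
        if errs:  # skip classes absent from the ground truth
            per_class.append(sum(errs) / len(errs))
    return sum(per_class) / len(per_class)

# Toy example: 5-voxel masks and 4 graded cases.
print(dice_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))   # 2*2/(3+3) ≈ 0.667
print(macro_mse([1, 2, 2, 4], [1, 2, 3, 4]))          # per-class MSEs 0,0,1,0 → 0.25
```

Macro-averaging matters here because the Koos grade distribution is imbalanced; a plain MSE would let the majority grade dominate the score.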

L. Han and Y. Huang—Contributed equally to this work.



Acknowledgement

Luyi Han was funded by a Chinese Scholarship Council (CSC) scholarship. This work was supported by the National Natural Science Foundation of China under Grant No. 62101365 and the startup foundation of the Nanjing University of Information Science and Technology.

Author information

Authors and Affiliations

  1. Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Geert Grooteplein 10, 6525, Nijmegen, GA, The Netherlands

    Luyi Han & Ritse Mann

  2. Faculty of Applied Sciences, Macao Polytechnic University, Macao, 999078, Macao, Special Administrative Region of China

    Tao Tan

  3. Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066, Amsterdam, CX, The Netherlands

    Luyi Han, Tao Tan & Ritse Mann

  4. School of Automation, Nanjing University of Information Science and Technology, Nanjing, 210044, China

    Yunzhi Huang

Authors
  1. Luyi Han
  2. Yunzhi Huang
  3. Tao Tan
  4. Ritse Mann

Corresponding author

Correspondence to Tao Tan.

Editor information

Editors and Affiliations

  1. University of Pennsylvania, Philadelphia, PA, USA

    Spyridon Bakas

  2. Sano, Center for Computational Personalised Medicine, Kraków, Poland

    Alessandro Crimi

  3. University of Pennsylvania, Philadelphia, PA, USA

    Ujjwal Baid

  4. Sano, Center for Computational Personalised Medicine, Kraków, Poland

    Sylwia Malec

  5. Sano, Center for Computational Personalised Medicine, Kraków, Poland

    Monika Pytlarz

  6. University of Pennsylvania, Philadelphia, PA, USA

    Bhakti Baheti

  7. German Cancer Research Center, Heidelberg, Germany

    Maximilian Zenk

  8. Harvard Medical School, Boston, MA, USA

    Reuben Dorent


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Han, L., Huang, Y., Tan, T., Mann, R. (2023). Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction Based on Semi-supervised Contrastive Learning. In: Bakas, S., et al. (eds.) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2022. Lecture Notes in Computer Science, vol. 14092. Springer, Cham. https://doi.org/10.1007/978-3-031-44153-0_5

