Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14092)
Included in the following conference series: BrainLes (International MICCAI Brainlesion Workshop)
Abstract
Domain adaptation has been widely adopted to transfer styles across multiple vendors and centers, as well as to complete missing modalities. In this challenge, we propose an unsupervised domain adaptation framework for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction. We learn a shared representation from both ceT1 and hrT2 images and recover the other modality from the latent representation, and we also utilize the proxy tasks of VS segmentation and brain parcellation to enforce the consistency of image structures during domain adaptation. After generating the missing modality, an nnU-Net model is utilized for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve model performance for Koos grade prediction. On the CrossMoDA validation-phase leaderboard, our method ranked 4th in Task 1 with a mean Dice score of 0.8394 and 2nd in Task 2 with a Macro-Average Mean Square Error of 0.3941. Our code is available at https://github.com/fiy2W/cmda2022.superpolymerization.
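The abstract mentions a semi-supervised contrastive learning pre-training stage for the Koos grade classifier; the details are in the paper body and the linked repository. As a purely illustrative aid, the sketch below shows a generic NT-Xent-style contrastive pre-training loss of the kind such a stage could use. The names (`ContrastiveHead`, `nt_xent`) and hyperparameters are assumptions, not the authors' released code.

```python
# Minimal sketch (assumed, not the authors' implementation) of NT-Xent-style
# contrastive pre-training for a classifier backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveHead(nn.Module):
    """Projects backbone features into the space where the contrastive loss is applied."""

    def __init__(self, in_dim: int, proj_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, in_dim),
            nn.ReLU(inplace=True),
            nn.Linear(in_dim, proj_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # L2-normalise so that dot products are cosine similarities.
        return F.normalize(self.mlp(feats), dim=1)


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss: z1[i] and z2[i] are embeddings of two augmented views of the same image."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)            # (2n, d), already normalised
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))         # exclude self-similarity
    # Positive of row i (< n) sits at index n + i, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In this kind of setup, two augmented views of each (possibly unlabeled) volume slice are embedded with a shared backbone, projected, and the loss above is minimised before the classifier is fine-tuned on the labelled Koos grades.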
L. Han and Y. Huang contributed equally to this work.
Acknowledgement
Luyi Han was funded by a China Scholarship Council (CSC) scholarship. This work was supported by the National Natural Science Foundation of China under Grant No. 62101365 and the startup foundation of Nanjing University of Information Science and Technology.
Author information
Authors and Affiliations
Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
Luyi Han & Ritse Mann
Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, Macao Special Administrative Region of China
Tao Tan
Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
Luyi Han, Tao Tan & Ritse Mann
School of Automation, Nanjing University of Information Science and Technology, Nanjing, 210044, China
Yunzhi Huang
- Luyi Han
- Yunzhi Huang
- Tao Tan
- Ritse Mann
Corresponding author
Correspondence to Tao Tan.
Editor information
Editors and Affiliations
University of Pennsylvania, Philadelphia, PA, USA
Spyridon Bakas
Sano, Center for Computational Personalised Medicine, Kraków, Poland
Alessandro Crimi
University of Pennsylvania, Philadelphia, PA, USA
Ujjwal Baid
Sano, Center for Computational Personalised Medicine, Kraków, Poland
Sylwia Malec
Sano, Center for Computational Personalised Medicine, Kraków, Poland
Monika Pytlarz
University of Pennsylvania, Philadelphia, PA, USA
Bhakti Baheti
German Cancer Research Center, Heidelberg, Germany
Maximilian Zenk
Harvard Medical School, Boston, MA, USA
Reuben Dorent
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Han, L., Huang, Y., Tan, T., Mann, R. (2023). Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction Based on Semi-supervised Contrastive Learning. In: Bakas, S., et al. (eds.) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2022. Lecture Notes in Computer Science, vol 14092. Springer, Cham. https://doi.org/10.1007/978-3-031-44153-0_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44152-3
Online ISBN: 978-3-031-44153-0
eBook Packages: Computer Science, Computer Science (R0)