- Xinquan Yang,
- Jinheng Xie,
- Xuguang Li,
- Xuechen Li,
- Xin Li,
- Linlin Shen &
- Yongqiang Deng
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14225)
Included in the following conference series: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)
Abstract
Although deep neural networks have been proposed to assist dentists in planning the location of dental implants, most existing methods target simple cases in which only a single tooth is missing. As a result, they perform poorly when multiple teeth are missing and easily produce false predictions when the remaining teeth are sparsely distributed. In this paper, we address these issues by integrating a weak supervisory text describing the target region into the implant position regression network. We propose a text condition embedded implant position regression network (TCEIP), which embeds the text condition into an encoder-decoder framework to improve regression performance. A cross-modal interaction module, consisting of cross-modal attention (CMA) and a knowledge alignment module (KAM), is proposed to facilitate interaction between image and text features. The CMA module performs cross-attention between the image features and the text condition, while the KAM mitigates the knowledge gap between the image features and the image encoder of CLIP. Extensive experiments on a dental implant dataset with five-fold cross-validation demonstrate that the proposed TCEIP outperforms existing methods.
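One plausible reading of the CMA step is a standard cross-attention layer in which flattened image tokens act as queries and the CLIP text embedding of the condition (e.g., "left", "middle", "right") supplies keys and values. The sketch below is an illustration under that assumption, not the authors' implementation; the `CrossModalAttention` class name, layer sizes, and tensor shapes are all assumptions.

```python
# Illustrative sketch only (not the authors' released code): a cross-attention block
# in which flattened image tokens query a CLIP text embedding of the target region
# (e.g., "left", "middle", "right"). All shapes, dims, and names are assumptions.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Image features as queries; the text condition supplies keys and values."""

    def __init__(self, img_dim: int = 256, txt_dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, img_dim)              # project text dim to image dim
        self.attn = nn.MultiheadAttention(img_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(img_dim)

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W) encoder feature map; txt_feat: (B, L, txt_dim) CLIP text tokens
        b, c, h, w = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)                  # (B, H*W, C) image tokens
        kv = self.txt_proj(txt_feat)                             # (B, L, C) text tokens
        out, _ = self.attn(query=q, key=kv, value=kv)            # text-conditioned image tokens
        out = self.norm(q + out)                                 # residual connection + layer norm
        return out.transpose(1, 2).reshape(b, c, h, w)           # restore (B, C, H, W)


if __name__ == "__main__":
    cma = CrossModalAttention()
    image_features = torch.randn(2, 256, 32, 32)                 # assumed encoder output size
    text_condition = torch.randn(2, 1, 512)                      # pooled CLIP embedding, L = 1
    print(cma(image_features, text_condition).shape)             # torch.Size([2, 256, 32, 32])
```

The KAM described in the abstract would sit alongside such a block, aligning the encoder's image features with the feature space of CLIP's image encoder; that alignment objective is omitted from this sketch.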
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant 82261138629; the Guangdong Basic and Applied Basic Research Foundation under Grants 2023A1515010688 and 2021A1515220072; and the Shenzhen Municipal Science and Technology Innovation Council under Grants JCYJ20220531101412030 and JCYJ20220530155811025.
Author information
Authors and Affiliations
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
Xinquan Yang, Jinheng Xie, Xuechen Li & Linlin Shen
AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen, China
Xinquan Yang, Jinheng Xie, Xuechen Li & Linlin Shen
National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, Shenzhen, China
Xinquan Yang, Jinheng Xie, Xuechen Li & Linlin Shen
Department of Stomatology, Shenzhen University General Hospital, Shenzhen, China
Xuguang Li, Xin Li & Yongqiang Deng
National University of Singapore, Singapore, Singapore
Jinheng Xie
Corresponding author
Correspondence to Linlin Shen.
Editor information
Editors and Affiliations
Icahn School of Medicine at Mount Sinai, New York, NY, USA, and Tel Aviv University, Tel Aviv, Israel
Hayit Greenspan
Emory University, Atlanta, GA, USA
Anant Madabhushi
Queen’s University, Kingston, ON, Canada
Parvin Mousavi
The University of British Columbia, Vancouver, BC, Canada
Septimiu Salcudean
Yale University, New Haven, CT, USA
James Duncan
IBM Research, San Jose, CA, USA
Tanveer Syeda-Mahmood
Johns Hopkins University, Baltimore, MD, USA
Russell Taylor
Electronic supplementary material
Below is the link to the electronic supplementary material.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yang, X. et al. (2023). TCEIP: Text Condition Embedded Regression Network for Dental Implant Position Prediction. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14225. Springer, Cham. https://doi.org/10.1007/978-3-031-43987-2_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43986-5
Online ISBN: 978-3-031-43987-2
eBook Packages: Computer Science, Computer Science (R0)