Springer Nature Link

Prompt Contrastive Learning Relation Extraction Method by Updating the Representation of Relation Label Words

  • Conference paper

Abstract

Relation extraction is a core task in natural language processing, focusing on predicting the relation label between given entities in a text. However, existing relation extraction models face several challenges, including insufficient logical reasoning, limited semantic information in relation labels, and susceptibility to misclassification. To address these issues, we propose the Updating Relation Label Word Representations Prompt Contrastive Learning (UPCL) framework. The framework (1) designs a novel template that provides explicit reasoning steps and improves the model's ability to perform complex reasoning; (2) updates the representations of relation label words using sentence information from the training set; and (3) trains these representations with a contrastive learning strategy. Experimental results on three relation extraction datasets show improved performance, demonstrating the effectiveness of our model. To verify the model's generalization capability, we also design experiments for multiple scenarios; these show that UPCL significantly outperforms the baselines across datasets.



Acknowledgments

This work is supported by the Science and Technology Development Plan Project of Jilin Province [20220203127SF], the National Natural Science Foundation of China [grant number 62162062], and the school-enterprise cooperation project of Yanbian University [2024-10].

Author information

Authors and Affiliations

  1. Intelligent Information Processing Lab, Department of Computer Science & Technology, Yanbian University, Yanji, 133002, China

    Yuanru Wang, Yahui Zhao, Guozhe Jin, Zhenguo Zhang & Rongyi Cui

  2. Department of Spine Surgery, China-Japan Union Hospital of Jilin University, Changchun, 130033, China

    Fei Yin

  3. School of Information Technology, Deakin University, Geelong, Australia

    Man Li

Authors
  1. Yuanru Wang
  2. Yahui Zhao
  3. Guozhe Jin
  4. Zhenguo Zhang
  5. Fei Yin
  6. Rongyi Cui
  7. Man Li

Corresponding author

Correspondence to Yahui Zhao.

Editor information

Editors and Affiliations

  1. Macquarie University, Sydney, NSW, Australia

    Quan Z. Sheng

  2. University of Auckland, Auckland, New Zealand

    Gill Dobbie

  3. Australian National University, Canberra, ACT, Australia

    Jing Jiang

  4. Macquarie University, Sydney, NSW, Australia

    Xuyun Zhang

  5. The University of Adelaide, Adelaide, SA, Australia

    Wei Emma Zhang

  6. Open University of Cyprus, Nicosia, Cyprus

    Yannis Manolopoulos

  7. Macquarie University, Sydney, NSW, Australia

    Jia Wu

  8. University of Dubai, Dubai, United Arab Emirates

    Wathiq Mansoor

  9. Macquarie University, Sydney, NSW, Australia

    Congbo Ma

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, Y. et al. (2025). Prompt Contrastive Learning Relation Extraction Method by Updating the Representation of Relation Label Words. In: Sheng, Q.Z., et al. (eds.) Advanced Data Mining and Applications. ADMA 2024. Lecture Notes in Computer Science, vol. 15391. Springer, Singapore. https://doi.org/10.1007/978-981-96-0847-8_9



