- Yuanru Wang,
- Yahui Zhao (ORCID: 0009-0008-6410-2627),
- Guozhe Jin (ORCID: 0000-0002-1835-1613),
- Zhenguo Zhang,
- Fei Yin,
- Rongyi Cui (ORCID: 0000-0003-2968-8921) &
- Man Li
Part of the book series: Lecture Notes in Computer Science (LNAI, volume 15391)
Included in the following conference series: International Conference on Advanced Data Mining and Applications (ADMA)
Abstract
Relation extraction is a core task in natural language processing that predicts the relation label between given entities in a text. Existing relation extraction models face several challenges, including insufficient logical reasoning, inadequate semantic information in relation labels, and a tendency toward misclassification. To address these issues, we propose the Updating Relation Label Word Representations Prompt Contrastive Learning (UPCL) framework. The framework (1) designs a novel template that provides explicit reasoning steps and improves the model's ability to perform complex reasoning; (2) updates the representations of relation label words using sentence information from the training set; and (3) further trains these representations with a contrastive learning strategy. Experimental results show that our model improves performance on three relation extraction datasets, demonstrating its effectiveness. To verify the model's generalization capability, we also design experiments for multiple scenarios, and these experiments show that UPCL significantly outperforms baselines across datasets.
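The abstract does not spell out how steps (2) and (3) are realized. As a rough illustration only, the following PyTorch-style sketch shows one plausible way to update relation label word representations from training-sentence embeddings and to train them with a contrastive objective. The function names, the momentum-style update, and the InfoNCE-style loss are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): one way to (a) update relation label
# word representations from training-sentence embeddings and (b) apply a
# contrastive loss between sentence and label representations.
# update_label_reps, contrastive_loss, momentum, and temperature are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def update_label_reps(label_reps, sent_embs, labels, momentum=0.9):
    """Moving-average update of each relation label representation using the
    embeddings of training sentences that carry that label."""
    for r in labels.unique():
        mask = labels == r
        if mask.any():
            batch_mean = sent_embs[mask].mean(dim=0)
            label_reps[r] = momentum * label_reps[r] + (1 - momentum) * batch_mean
    return label_reps

def contrastive_loss(sent_embs, label_reps, labels, temperature=0.07):
    """InfoNCE-style loss: each sentence embedding should be closest to the
    representation of its own relation label."""
    sent = F.normalize(sent_embs, dim=-1)
    lab = F.normalize(label_reps, dim=-1)
    logits = sent @ lab.t() / temperature      # [batch, num_relations]
    return F.cross_entropy(logits, labels)

# Example usage with random tensors standing in for encoder outputs.
num_relations, hidden = 40, 768
label_reps = torch.randn(num_relations, hidden)
sent_embs = torch.randn(16, hidden)            # e.g. mask-position embeddings
labels = torch.randint(0, num_relations, (16,))
label_reps = update_label_reps(label_reps, sent_embs, labels)
loss = contrastive_loss(sent_embs, label_reps, labels)
```

In this sketch, the moving-average update plays the role of "using sentence information in the training set" to refresh label word representations, and the contrastive loss pulls each sentence representation toward its own label representation while pushing it away from the others.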
Acknowledgments
This work is supported by the Science and Technology Development Plan Project of Jilin Province [20220203127SF], the National Natural Science Foundation of China [grant number 62162062], and the school-enterprise cooperation project of Yanbian University [2024-10].
Author information
Authors and Affiliations
Intelligent Information Processing Lab, Department of Computer Science & Technology, Yanbian University, Yanji, 133002, China
Yuanru Wang, Yahui Zhao, Guozhe Jin, Zhenguo Zhang & Rongyi Cui
Department of Spine Surgery, China-Japan Union Hospital of Jilin University, Changchun, 130033, China
Fei Yin
School of Information Technology, Deakin University, Geelong, Australia
Man Li
Corresponding author
Correspondence to Yahui Zhao.
Editor information
Editors and Affiliations
Macquarie University, Sydney, NSW, Australia
Quan Z. Sheng
University of Auckland, Auckland, New Zealand
Gill Dobbie
Australian National University, Canberra, ACT, Australia
Jing Jiang
Macquarie University, Sydney, NSW, Australia
Xuyun Zhang
The University of Adelaide, Adelaide, SA, Australia
Wei Emma Zhang
Open University of Cyprus, Nicosia, Cyprus
Yannis Manolopoulos
Macquarie University, Sydney, NSW, Australia
Jia Wu
University of Dubai, Dubai, United Arab Emirates
Wathiq Mansoor
Macquarie University, Sydney, NSW, Australia
Congbo Ma
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Wang, Y. et al. (2025). Prompt Contrastive Learning Relation Extraction Method by Updating the Representation of Relation Label Words. In: Sheng, Q.Z., et al. Advanced Data Mining and Applications. ADMA 2024. Lecture Notes in Computer Science, vol. 15391. Springer, Singapore. https://doi.org/10.1007/978-981-96-0847-8_9
Publisher Name: Springer, Singapore
Print ISBN: 978-981-96-0846-1
Online ISBN: 978-981-96-0847-8
eBook Packages: Computer Science, Computer Science (R0)