Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot Natural Language Understanding (NLU) tasks by employing task-specific prompts. Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits few-shot learning performance on downstream tasks. It would be desirable if models could stimulate prompting knowledge while adapting to specific NLU tasks. We present the Adversarial Knowledge Stimulated Contrastive Prompting (AKSCP) framework, which improves few-shot NLU by implicitly stimulating knowledge from the pre-trained language model. In AKSCP, a novel Cloze-driven prompt paradigm is proposed for joint prompt tuning across the word cloze task and prompt-based learning, forcing PLMs to stimulate prompting knowledge. We further design an adversarial contrastive learning method to improve the generalization ability of the PLM across different downstream tasks. Experiments over a variety of NLU tasks show that AKSCP consistently outperforms state-of-the-art prompt-based fine-tuning methods.
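As a rough, hypothetical illustration of the two ingredients the abstract mentions, the sketch below combines (i) cloze-style prompt-based fine-tuning, where a mask-bearing template is scored against a small verbalizer, with (ii) an FGM-style adversarial perturbation of the input embeddings plus an NT-Xent contrastive term between the clean and perturbed mask representations. This is a minimal PyTorch sketch of the general recipe, not the authors' AKSCP implementation; the `roberta-base` backbone, the template, the verbalizer, and the hyperparameters `eps`, `tau`, and `lam` are illustrative assumptions.

```python
# Minimal sketch: cloze-style prompt fine-tuning with an adversarial contrastive
# regularizer. All names, templates, and hyperparameters are illustrative
# assumptions, not taken from the AKSCP paper.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Hypothetical binary-sentiment verbalizer: one label word per class.
label_words = [" terrible", " great"]
label_ids = torch.tensor([tokenizer.encode(w, add_special_tokens=False)[0]
                          for w in label_words])

def forward_mask(inputs_embeds, attention_mask, mask_pos):
    """Return (class logits over label words, hidden state) at the mask position."""
    out = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask,
                output_hidden_states=True)
    idx = torch.arange(inputs_embeds.size(0))
    logits = out.logits[idx, mask_pos][:, label_ids]   # [batch, num_classes]
    hidden = out.hidden_states[-1][idx, mask_pos]      # [batch, hidden_size]
    return logits, hidden

def training_step(sentences, labels, eps=1e-2, tau=0.1, lam=0.5):
    # Cloze template: "<x> It was <mask>." (an assumed, generic template).
    prompts = [f"{s} It was {tokenizer.mask_token}." for s in sentences]
    enc = tokenizer(prompts, return_tensors="pt", padding=True)
    mask_pos = (enc.input_ids == tokenizer.mask_token_id).float().argmax(dim=1)
    embeds = model.get_input_embeddings()(enc.input_ids).detach().requires_grad_(True)

    # Clean pass: standard prompt-based (cloze) classification loss.
    logits, h_clean = forward_mask(embeds, enc.attention_mask, mask_pos)
    ce = F.cross_entropy(logits, labels)

    # FGM-style perturbation: one normalized gradient step on the input embeddings.
    grad, = torch.autograd.grad(ce, embeds, retain_graph=True)
    delta = eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    adv_logits, h_adv = forward_mask(embeds + delta, enc.attention_mask, mask_pos)

    # Contrastive term (NT-Xent style): pull each clean mask state toward its own
    # adversarial view and push it away from other examples in the batch.
    z1, z2 = F.normalize(h_clean, dim=-1), F.normalize(h_adv, dim=-1)
    sim = z1 @ z2.t() / tau
    ctr = F.cross_entropy(sim, torch.arange(len(sentences)))

    loss = ce + F.cross_entropy(adv_logits, labels) + lam * ctr
    loss.backward()
    return loss

# Example call (hypothetical data):
# loss = training_step(["A gripping film.", "A dull, lifeless mess."],
#                      torch.tensor([1, 0]))
```

In a real few-shot setup one would batch over the training split and tune `eps`, `tau`, and `lam`; the point of the sketch is only how a cloze loss, an adversarial pass, and a contrastive term can compose into a single training objective of the kind the abstract describes.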
Kai Zheng, Qingfeng Sun, Yaming Yang, Tengchao Lv, Yeyong Pi, Changlin Zhao, Fei Xu, and Qi Zhang. 2023. Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13495–13507, Toronto, Canada. Association for Computational Linguistics.
@inproceedings{zheng-etal-2023-adversarial,
    title = "Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners",
    author = "Zheng, Kai and Sun, Qingfeng and Yang, Yaming and Lv, Tengchao and Pi, Yeyong and Zhao, Changlin and Xu, Fei and Zhang, Qi",
    editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.852/",
    doi = "10.18653/v1/2023.findings-acl.852",
    pages = "13495--13507",
    abstract = "Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot Natural Language Understanding (NLU) tasks by employing task-specific prompts. Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits few-shot learning performance on downstream tasks. It would be desirable if models could stimulate prompting knowledge while adapting to specific NLU tasks. We present the Adversarial Knowledge Stimulated Contrastive Prompting (AKSCP) framework, which improves few-shot NLU by implicitly stimulating knowledge from the pre-trained language model. In AKSCP, a novel Cloze-driven prompt paradigm is proposed for joint prompt tuning across the word cloze task and prompt-based learning, forcing PLMs to stimulate prompting knowledge. We further design an adversarial contrastive learning method to improve the generalization ability of the PLM across different downstream tasks. Experiments over a variety of NLU tasks show that AKSCP consistently outperforms state-of-the-art prompt-based fine-tuning methods."
}
[Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners](https://aclanthology.org/2023.findings-acl.852/) (Zheng et al., Findings 2023)