Grounded dialogue models generate responses that are grounded on certain concepts. Limited by the distribution of grounded dialogue data, models trained on such data face the transferability challenges in terms of the data distribution and the type of grounded concepts. To address the challenges, we propose the grounded minimal editing framework, which minimally edits existing responses to be grounded on the given concept. Focusing on personas, we propose Grounded Minimal Editor (GME), which learns to edit by disentangling and recombining persona-related and persona-agnostic parts of the response. To evaluate persona-grounded minimal editing, we present the PersonaMinEdit dataset, and experimental results show that GME outperforms competitive baselines by a large margin. To evaluate the transferability, we experiment on the test set of BlendedSkillTalk and show that GME can edit dialogue models’ responses to largely improve their persona consistency while preserving the use of knowledge and empathy.
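To make the framework's input/output contract concrete, here is a minimal Python sketch of grounded minimal editing viewed as "mask persona-related spans, then infill under the new persona". This is an illustrative assumption for exposition, not the authors' released implementation: the word-overlap masking heuristic and all function names below are hypothetical, and a real editor would replace the placeholder infilling step with a trained sequence-to-sequence model.

from typing import Set

def _norm(token: str) -> str:
    # Lowercase and drop trailing punctuation for crude word matching.
    return token.lower().strip(".,!?")

def mask_persona_spans(response: str, persona_words: Set[str]) -> str:
    """Replace words that overlap the old persona with [MASK], keeping
    the persona-agnostic skeleton of the response intact. Short words
    are skipped so stopwords like "I" and "the" are not masked."""
    return " ".join(
        "[MASK]" if _norm(t) in persona_words and len(_norm(t)) > 3 else t
        for t in response.split()
    )

def grounded_minimal_edit(response: str, old_persona: str, new_persona: str) -> str:
    """Conceptual two-step pipeline: (1) disentangle by masking the
    persona-related words, (2) recombine by infilling the masks
    conditioned on the new persona. A trained infilling model would be
    called here; we return a placeholder showing its inputs."""
    persona_words = {_norm(w) for w in old_persona.split()}
    template = mask_persona_spans(response, persona_words)
    return f"infill(template={template!r}, persona={new_persona!r})"

if __name__ == "__main__":
    # Template becomes "I love [MASK] in the [MASK] every weekend."
    print(grounded_minimal_edit(
        "I love hiking in the mountains every weekend.",
        old_persona="I like hiking in the mountains.",
        new_persona="I enjoy painting landscapes.",
    ))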
Chen Henry Wu, Yinhe Zheng, Xiaoxi Mao, and Minlie Huang. 2021. Transferable Persona-Grounded Dialogues via Grounded Minimal Edits. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2368–2382, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
@inproceedings{wu-etal-2021-transferable,
    title = "Transferable Persona-Grounded Dialogues via Grounded Minimal Edits",
    author = "Wu, Chen Henry  and
      Zheng, Yinhe  and
      Mao, Xiaoxi  and
      Huang, Minlie",
    editor = "Moens, Marie-Francine  and
      Huang, Xuanjing  and
      Specia, Lucia  and
      Yih, Scott Wen-tau",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.183/",
    doi = "10.18653/v1/2021.emnlp-main.183",
    pages = "2368--2382",
    abstract = "Grounded dialogue models generate responses that are grounded on certain concepts. Limited by the distribution of grounded dialogue data, models trained on such data face the \textit{transferability} challenges in terms of the data distribution and the type of grounded concepts. To address the challenges, we propose the \textit{grounded minimal editing} framework, which minimally edits existing responses to be grounded on the given concept. Focusing on personas, we propose Grounded Minimal Editor (GME), which learns to edit by disentangling and recombining persona-related and persona-agnostic parts of the response. To evaluate persona-grounded minimal editing, we present the PersonaMinEdit dataset, and experimental results show that GME outperforms competitive baselines by a large margin. To evaluate the transferability, we experiment on the test set of BlendedSkillTalk and show that GME can edit dialogue models' responses to largely improve their persona consistency while preserving the use of knowledge and empathy."
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
  <mods ID="wu-etal-2021-transferable">
    <titleInfo>
      <title>Transferable Persona-Grounded Dialogues via Grounded Minimal Edits</title>
    </titleInfo>
    <name type="personal">
      <namePart type="given">Chen</namePart>
      <namePart type="given">Henry</namePart>
      <namePart type="family">Wu</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Yinhe</namePart>
      <namePart type="family">Zheng</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Xiaoxi</namePart>
      <namePart type="family">Mao</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Minlie</namePart>
      <namePart type="family">Huang</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <originInfo>
      <dateIssued>2021-11</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
      <titleInfo>
        <title>Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</title>
      </titleInfo>
      <name type="personal">
        <namePart type="given">Marie-Francine</namePart>
        <namePart type="family">Moens</namePart>
        <role><roleTerm authority="marcrelator" type="text">editor</roleTerm></role>
      </name>
      <name type="personal">
        <namePart type="given">Xuanjing</namePart>
        <namePart type="family">Huang</namePart>
        <role><roleTerm authority="marcrelator" type="text">editor</roleTerm></role>
      </name>
      <name type="personal">
        <namePart type="given">Lucia</namePart>
        <namePart type="family">Specia</namePart>
        <role><roleTerm authority="marcrelator" type="text">editor</roleTerm></role>
      </name>
      <name type="personal">
        <namePart type="given">Scott</namePart>
        <namePart type="given">Wen-tau</namePart>
        <namePart type="family">Yih</namePart>
        <role><roleTerm authority="marcrelator" type="text">editor</roleTerm></role>
      </name>
      <originInfo>
        <publisher>Association for Computational Linguistics</publisher>
        <place>
          <placeTerm type="text">Online and Punta Cana, Dominican Republic</placeTerm>
        </place>
      </originInfo>
      <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>Grounded dialogue models generate responses that are grounded on certain concepts. Limited by the distribution of grounded dialogue data, models trained on such data face the transferability challenges in terms of the data distribution and the type of grounded concepts. To address the challenges, we propose the grounded minimal editing framework, which minimally edits existing responses to be grounded on the given concept. Focusing on personas, we propose Grounded Minimal Editor (GME), which learns to edit by disentangling and recombining persona-related and persona-agnostic parts of the response. To evaluate persona-grounded minimal editing, we present the PersonaMinEdit dataset, and experimental results show that GME outperforms competitive baselines by a large margin. To evaluate the transferability, we experiment on the test set of BlendedSkillTalk and show that GME can edit dialogue models’ responses to largely improve their persona consistency while preserving the use of knowledge and empathy.</abstract>
    <identifier type="citekey">wu-etal-2021-transferable</identifier>
    <identifier type="doi">10.18653/v1/2021.emnlp-main.183</identifier>
    <location>
      <url>https://aclanthology.org/2021.emnlp-main.183/</url>
    </location>
    <part>
      <date>2021-11</date>
      <extent unit="page">
        <start>2368</start>
        <end>2382</end>
      </extent>
    </part>
  </mods>
</modsCollection>
%0 Conference Proceedings
%T Transferable Persona-Grounded Dialogues via Grounded Minimal Edits
%A Wu, Chen Henry
%A Zheng, Yinhe
%A Mao, Xiaoxi
%A Huang, Minlie
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F wu-etal-2021-transferable
%X Grounded dialogue models generate responses that are grounded on certain concepts. Limited by the distribution of grounded dialogue data, models trained on such data face the transferability challenges in terms of the data distribution and the type of grounded concepts. To address the challenges, we propose the grounded minimal editing framework, which minimally edits existing responses to be grounded on the given concept. Focusing on personas, we propose Grounded Minimal Editor (GME), which learns to edit by disentangling and recombining persona-related and persona-agnostic parts of the response. To evaluate persona-grounded minimal editing, we present the PersonaMinEdit dataset, and experimental results show that GME outperforms competitive baselines by a large margin. To evaluate the transferability, we experiment on the test set of BlendedSkillTalk and show that GME can edit dialogue models’ responses to largely improve their persona consistency while preserving the use of knowledge and empathy.
%R 10.18653/v1/2021.emnlp-main.183
%U https://aclanthology.org/2021.emnlp-main.183/
%U https://doi.org/10.18653/v1/2021.emnlp-main.183
%P 2368-2382