Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).
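The abstract describes a loss that jointly maps the two polarity ends toward each other ("bidirectionally"). As a rough illustration only, and not the paper's actual formulation, one symmetric polarity-difference term could be sketched in PyTorch as below; the function name, the use of pooled article representations, and the choice of a symmetrized KL divergence are all assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def polarity_minimization_loss(left_repr: torch.Tensor,
                               right_repr: torch.Tensor) -> torch.Tensor:
    """Hypothetical symmetric penalty on the divergence between
    representations of left- and right-leaning input articles.

    Penalizing both directions (left->right and right->left) is one way
    to "map polarity ends bidirectionally", as the abstract puts it.
    """
    left_log = F.log_softmax(left_repr, dim=-1)
    right_log = F.log_softmax(right_repr, dim=-1)
    # KL(left || right) and KL(right || left), averaged over the batch.
    kl_lr = F.kl_div(left_log, right_log.exp(), reduction="batchmean")
    kl_rl = F.kl_div(right_log, left_log.exp(), reduction="batchmean")
    return 0.5 * (kl_lr + kl_rl)

# Toy usage: two batches of pooled article representations.
left = torch.randn(4, 16)
right = torch.randn(4, 16)
loss = polarity_minimization_loss(left, right)
# Identical inputs incur zero polarity penalty.
zero = polarity_minimization_loss(left, left)
```

In practice such a term would be added, with some weight, to the standard generation loss of the summarization model; the paper's own combination and weighting are not reproduced here.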
Yejin Bang, Nayeon Lee, and Pascale Fung. 2023. Mitigating Framing Bias with Polarity Minimization Loss. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11100–11110, Singapore. Association for Computational Linguistics.
@inproceedings{bang-etal-2023-mitigating,
    title = "Mitigating Framing Bias with Polarity Minimization Loss",
    author = "Bang, Yejin  and
      Lee, Nayeon  and
      Fung, Pascale",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-emnlp.742/",
    doi = "10.18653/v1/2023.findings-emnlp.742",
    pages = "11100--11110",
    abstract = "Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).",
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="bang-etal-2023-mitigating">
  <titleInfo>
    <title>Mitigating Framing Bias with Polarity Minimization Loss</title>
  </titleInfo>
  <name type="personal">
    <namePart type="given">Yejin</namePart>
    <namePart type="family">Bang</namePart>
    <role>
      <roleTerm authority="marcrelator" type="text">author</roleTerm>
    </role>
  </name>
  <name type="personal">
    <namePart type="given">Nayeon</namePart>
    <namePart type="family">Lee</namePart>
    <role>
      <roleTerm authority="marcrelator" type="text">author</roleTerm>
    </role>
  </name>
  <name type="personal">
    <namePart type="given">Pascale</namePart>
    <namePart type="family">Fung</namePart>
    <role>
      <roleTerm authority="marcrelator" type="text">author</roleTerm>
    </role>
  </name>
  <originInfo>
    <dateIssued>2023-12</dateIssued>
  </originInfo>
  <typeOfResource>text</typeOfResource>
  <relatedItem type="host">
    <titleInfo>
      <title>Findings of the Association for Computational Linguistics: EMNLP 2023</title>
    </titleInfo>
    <name type="personal">
      <namePart type="given">Houda</namePart>
      <namePart type="family">Bouamor</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">editor</roleTerm>
      </role>
    </name>
    <name type="personal">
      <namePart type="given">Juan</namePart>
      <namePart type="family">Pino</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">editor</roleTerm>
      </role>
    </name>
    <name type="personal">
      <namePart type="given">Kalika</namePart>
      <namePart type="family">Bali</namePart>
      <role>
        <roleTerm authority="marcrelator" type="text">editor</roleTerm>
      </role>
    </name>
    <originInfo>
      <publisher>Association for Computational Linguistics</publisher>
      <place>
        <placeTerm type="text">Singapore</placeTerm>
      </place>
    </originInfo>
    <genre authority="marcgt">conference publication</genre>
  </relatedItem>
  <abstract>Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).</abstract>
  <identifier type="citekey">bang-etal-2023-mitigating</identifier>
  <identifier type="doi">10.18653/v1/2023.findings-emnlp.742</identifier>
  <location>
    <url>https://aclanthology.org/2023.findings-emnlp.742/</url>
  </location>
  <part>
    <date>2023-12</date>
    <extent unit="page">
      <start>11100</start>
      <end>11110</end>
    </extent>
  </part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T Mitigating Framing Bias with Polarity Minimization Loss
%A Bang, Yejin
%A Lee, Nayeon
%A Fung, Pascale
%Y Bouamor, Houda
%Y Pino, Juan
%Y Bali, Kalika
%S Findings of the Association for Computational Linguistics: EMNLP 2023
%D 2023
%8 December
%I Association for Computational Linguistics
%C Singapore
%F bang-etal-2023-mitigating
%X Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).
%R 10.18653/v1/2023.findings-emnlp.742
%U https://aclanthology.org/2023.findings-emnlp.742/
%U https://doi.org/10.18653/v1/2023.findings-emnlp.742
%P 11100-11110