Language models frequently inherit societal biases from their training data. Numerous techniques have been proposed to mitigate these biases during both the pre-training and fine-tuning stages. However, fine-tuning a pre-trained debiased language model on a downstream task can reintroduce biases into the model. Additionally, existing debiasing methods for downstream tasks either (i) require labels of protected attributes (e.g., age, race, or political views) that are often unavailable or (ii) rely on indicators of bias such as gender-specific words, which restricts their applicability to gender debiasing. To address this, we introduce a novel debiasing regularization technique based on the class-wise variance of embeddings. Crucially, our method does not require attribute labels and can target any attribute, thus addressing the shortcomings of existing debiasing methods. Our experiments on encoder language models and three datasets demonstrate that our method outperforms strong existing debiasing baselines that rely on target attribute labels while maintaining performance on the target task.
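To make the core idea concrete, below is a minimal PyTorch sketch of what a class-wise low-variance regularizer can look like: it penalizes the variance of encoder embeddings within each task class, using only the task labels and no protected-attribute labels. The function name, the weighting hyperparameter, and the training-step usage are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def class_wise_variance_penalty(embeddings: torch.Tensor,
                                class_labels: torch.Tensor) -> torch.Tensor:
    """Illustrative class-wise low-variance regularizer (not the official code).

    embeddings:   (batch_size, hidden_dim) encoder outputs, e.g. [CLS] vectors.
    class_labels: (batch_size,) downstream task labels (NOT attribute labels).

    For each task class present in the batch, penalize the variance of the
    embeddings belonging to that class, pushing same-class representations
    toward their mean so they carry less attribute-specific information.
    """
    penalty = embeddings.new_zeros(())
    classes = class_labels.unique()
    for c in classes:
        members = embeddings[class_labels == c]
        if members.size(0) > 1:
            # Average per-dimension variance of this class's embeddings.
            penalty = penalty + members.var(dim=0, unbiased=False).mean()
    return penalty / classes.numel()

# Hypothetical training step: add the penalty to the task loss with weight lambda_reg.
# logits, cls_embeddings = model(input_ids, attention_mask)
# loss = F.cross_entropy(logits, labels) \
#      + lambda_reg * class_wise_variance_penalty(cls_embeddings, labels)
```

In this sketch, lambda_reg trades off task accuracy against how strongly within-class embedding variance is suppressed; the paper should be consulted for the actual formulation and hyperparameters.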
Shahed Masoudian, Markus Frohmann, Navid Rekabsaz, and Markus Schedl. 2024. Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10932–10938, Miami, Florida, USA. Association for Computational Linguistics.
@inproceedings{masoudian-etal-2024-unlabeled,
    title = "Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization",
    author = "Masoudian, Shahed and Frohmann, Markus and Rekabsaz, Navid and Schedl, Markus",
    editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.612/",
    doi = "10.18653/v1/2024.emnlp-main.612",
    pages = "10932--10938",
}
[Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization](https://aclanthology.org/2024.emnlp-main.612/) (Masoudian et al., EMNLP 2024)