This paper presents our contribution to the Offensive Language Classification Task (English SubTask A) of SemEval-2020. We propose different BERT models trained on several offensive language classification and profanity datasets, and combine their output predictions in an ensemble model. We experimented with different ensemble approaches, such as SVMs, gradient boosting, AdaBoost, and logistic regression. We further propose an under-sampling approach for the SOLID dataset that removes its most uncertain partitions, increasing recall. Our best model, an average ensemble of four different BERT models, achieved 11th place out of 82 participants with a macro F1 score of 0.91344 in the English SubTask A.
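A minimal sketch of the averaging ensemble described in the abstract, assuming each fine-tuned BERT model yields a per-tweet probability of the offensive class; the function name, the example probabilities, and the 0.5 decision threshold are illustrative assumptions, not the authors' released code.

    import numpy as np

    def average_ensemble(probs_per_model: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Soft-voting ensemble: average per-model P(offensive) and threshold.

        probs_per_model: shape (n_models, n_examples), one row per BERT model.
        Returns 1 (OFF) or 0 (NOT) per example.
        """
        mean_probs = probs_per_model.mean(axis=0)      # average across models
        return (mean_probs >= threshold).astype(int)   # hard label per example

    # Usage: combine four models' predictions on a shared evaluation set.
    probs = np.array([
        [0.91, 0.12, 0.55, 0.03],   # model 1
        [0.85, 0.20, 0.48, 0.10],   # model 2
        [0.88, 0.05, 0.61, 0.07],   # model 3
        [0.95, 0.15, 0.52, 0.02],   # model 4
    ])
    labels = average_ensemble(probs)  # -> array([1, 0, 1, 0])

The paper also reports learned combiners (SVM, gradient boosting, AdaBoost, logistic regression) over the same per-model outputs; the plain average above is the variant the abstract identifies as the best-performing submission.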
@inproceedings{wang-marinho-2020-nova,
    title = "Nova-Wang at {S}em{E}val-2020 Task 12: {O}ffens{E}mblert: An Ensemble of {O}ffensive Language Classifiers",
    author = "Wang, Susan and Marinho, Zita",
    editor = "Herbelot, Aurelie and Zhu, Xiaodan and Palmer, Alexis and Schneider, Nathan and May, Jonathan and Shutova, Ekaterina",
    booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
    month = dec,
    year = "2020",
    address = "Barcelona (online)",
    publisher = "International Committee for Computational Linguistics",
    url = "https://aclanthology.org/2020.semeval-1.207/",
    doi = "10.18653/v1/2020.semeval-1.207",
    pages = "1587--1597",
}