Multilingual language models have shown decent performance in multilingual and cross-lingual natural language understanding tasks. However, the power of these multilingual models in code-switching tasks has not been fully explored. In this paper, we study the effectiveness of multilingual language models to understand their capability and adaptability to the mixed-language setting by considering the inference speed, performance, and number of parameters to measure their practicality. We conduct experiments in three language pairs on named entity recognition and part-of-speech tagging and compare them with existing methods, such as using bilingual embeddings and multilingual meta-embeddings. Our findings suggest that pre-trained multilingual models do not necessarily guarantee high-quality representations on code-switching, while using meta-embeddings achieves similar results with significantly fewer parameters.
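The multilingual meta-embedding approach mentioned in the abstract combines word vectors drawn from several pre-trained embedding spaces into a single representation per token. Below is a minimal illustrative sketch, not the paper's exact implementation: it assumes a PyTorch setup in which each token has already been looked up in multiple monolingual embedding tables, and all module names, dimensions, and the attention formulation are hypothetical choices made for the example.

# Minimal sketch of attention-based multilingual meta-embeddings (assumed
# setup, not the paper's code): project word vectors from several embedding
# spaces to a common dimension, then combine them with learned per-token
# attention weights.
import torch
import torch.nn as nn


class MetaEmbedding(nn.Module):
    def __init__(self, embedding_dims, proj_dim):
        super().__init__()
        # One linear projection per source embedding space (dims are assumptions).
        self.projections = nn.ModuleList(
            [nn.Linear(d, proj_dim) for d in embedding_dims]
        )
        # Scalar attention score per projected source embedding.
        self.attention = nn.Linear(proj_dim, 1)

    def forward(self, embeddings):
        # embeddings: list of tensors, each (batch, seq_len, dim_i),
        # already looked up from pre-trained monolingual/bilingual tables.
        projected = torch.stack(
            [proj(e) for proj, e in zip(self.projections, embeddings)], dim=2
        )  # (batch, seq_len, num_sources, proj_dim)
        scores = self.attention(projected)       # (batch, seq_len, num_sources, 1)
        weights = torch.softmax(scores, dim=2)   # normalize over embedding sources
        return (weights * projected).sum(dim=2)  # (batch, seq_len, proj_dim)


# Hypothetical usage with two embedding sources (e.g., English and Spanish vectors).
if __name__ == "__main__":
    meta = MetaEmbedding(embedding_dims=[300, 200], proj_dim=256)
    en = torch.randn(2, 10, 300)  # placeholder lookups, not real embeddings
    es = torch.randn(2, 10, 200)
    out = meta([en, es])
    print(out.shape)              # torch.Size([2, 10, 256])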
Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, and Pascale Fung. 2021. Are Multilingual Models Effective in Code-Switching? In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 142–153, Online. Association for Computational Linguistics.
@inproceedings{winata-etal-2021-multilingual,
    title = "Are Multilingual Models Effective in Code-Switching?",
    author = "Winata, Genta Indra and
      Cahyawijaya, Samuel and
      Liu, Zihan and
      Lin, Zhaojiang and
      Madotto, Andrea and
      Fung, Pascale",
    editor = "Solorio, Thamar and
      Chen, Shuguang and
      Black, Alan W. and
      Diab, Mona and
      Sitaram, Sunayana and
      Soto, Victor and
      Yilmaz, Emre and
      Srinivasan, Anirudh",
    booktitle = "Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.calcs-1.20/",
    doi = "10.18653/v1/2021.calcs-1.20",
    pages = "142--153",
    abstract = "Multilingual language models have shown decent performance in multilingual and cross-lingual natural language understanding tasks. However, the power of these multilingual models in code-switching tasks has not been fully explored. In this paper, we study the effectiveness of multilingual language models to understand their capability and adaptability to the mixed-language setting by considering the inference speed, performance, and number of parameters to measure their practicality. We conduct experiments in three language pairs on named entity recognition and part-of-speech tagging and compare them with existing methods, such as using bilingual embeddings and multilingual meta-embeddings. Our findings suggest that pre-trained multilingual models do not necessarily guarantee high-quality representations on code-switching, while using meta-embeddings achieves similar results with significantly fewer parameters."
}