A broad goal in natural language processing (NLP) is to develop a system that has the capacity to process any natural language. Most systems, however, are developed using data from just one language, such as English. The SIGMORPHON 2020 shared task on morphological reinflection aims to investigate systems’ ability to generalize across typologically distinct languages, many of which are low resource. Systems were developed using data from 45 languages and just 5 language families, fine-tuned with data from an additional 45 languages and 10 language families (13 in total), and evaluated on all 90 languages. A total of 22 systems (19 neural) from 10 teams were submitted to the task. All four winning systems were neural (two monolingual transformers and two massively multilingual RNN-based models with gated attention). Most teams demonstrated the utility of data hallucination and augmentation, ensembles, and multilingual training for low-resource languages. Non-neural learners and manually designed grammars showed competitive and even superior performance on some languages (such as Ingrian, Tajik, Tagalog, Zarma, and Lingala), especially with very limited data. Some language families (Afro-Asiatic, Niger-Congo, Turkic) were relatively easy for most systems, which achieved over 90% mean accuracy on them, while others were more challenging.
Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Maria Ponti, Rowan Hall Maudslay, Ran Zmigrod, Josef Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrew Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Adina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 1–39, Online. Association for Computational Linguistics.
@inproceedings{vylomova-etal-2020-sigmorphon,
    title = "{SIGMORPHON} 2020 Shared Task 0: Typologically Diverse Morphological Inflection",
    author = "Vylomova, Ekaterina and White, Jennifer and Salesky, Elizabeth and Mielke, Sabrina J. and Wu, Shijie and Ponti, Edoardo Maria and Maudslay, Rowan Hall and Zmigrod, Ran and Valvoda, Josef and Toldova, Svetlana and Tyers, Francis and Klyachko, Elena and Yegorov, Ilya and Krizhanovsky, Natalia and Czarnowska, Paula and Nikkarinen, Irene and Krizhanovsky, Andrew and Pimentel, Tiago and Torroba Hennigen, Lucas and Kirov, Christo and Nicolai, Garrett and Williams, Adina and Anastasopoulos, Antonios and Cruz, Hilaria and Chodroff, Eleanor and Cotterell, Ryan and Silfverberg, Miikka and Hulden, Mans",
    editor = "Nicolai, Garrett and Gorman, Kyle and Cotterell, Ryan",
    booktitle = "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.sigmorphon-1.1/",
    doi = "10.18653/v1/2020.sigmorphon-1.1",
    pages = "1--39",
    abstract = "A broad goal in natural language processing (NLP) is to develop a system that has the capacity to process any natural language. Most systems, however, are developed using data from just one language such as English. The SIGMORPHON 2020 shared task on morphological reinflection aims to investigate systems' ability to generalize across typologically distinct languages, many of which are low resource. Systems were developed using data from 45 languages and just 5 language families, fine-tuned with data from an additional 45 languages and 10 language families (13 in total), and evaluated on all 90 languages. A total of 22 systems (19 neural) from 10 teams were submitted to the task. All four winning systems were neural (two monolingual transformers and two massively multilingual RNN-based models with gated attention). Most teams demonstrate utility of data hallucination and augmentation, ensembles, and multilingual training for low-resource languages. Non-neural learners and manually designed grammars showed competitive and even superior performance on some languages (such as Ingrian, Tajik, Tagalog, Zarma, Lingala), especially with very limited data. Some language families (Afro-Asiatic, Niger-Congo, Turkic) were relatively easy for most systems and achieved over 90{\%} mean accuracy while others were more challenging."
}
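If you need to pull individual fields out of a flat record like the one above, a minimal sketch using only the Python standard library could look like this (a dedicated parser such as bibtexparser handles nested braces, unquoted values like `month = jul`, and escapes more robustly; this regex-based helper and the truncated `ENTRY` string are illustrative assumptions, not part of the record itself):

```python
import re

# A shortened copy of the entry above, for illustration only.
ENTRY = '''@inproceedings{vylomova-etal-2020-sigmorphon,
    title = "{SIGMORPHON} 2020 Shared Task 0: Typologically Diverse Morphological Inflection",
    year = "2020",
    pages = "1--39",
}'''

def bibtex_fields(entry: str) -> dict:
    """Extract key = "value" pairs from one flat, double-quoted BibTeX entry.

    Assumes simple fields with no nested quotes; unquoted fields
    (e.g. month = jul) are not captured by this sketch.
    """
    return dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry))

fields = bibtex_fields(ENTRY)
print(fields["year"])   # -> 2020
print(fields["pages"])  # -> 1--39
```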