Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the fine-tuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets. However, existing datasets are almost all in the English language. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances through templating and augmenting existing datasets across 114 languages. In total, we contribute three key resources: we develop and open-source the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as an important framework for future research collaborations that aim to bridge gaps in resources.
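For readers who want to inspect the released resources, below is a minimal sketch of loading the Aya Dataset with the Hugging Face datasets library. The repository identifier "CohereForAI/aya_dataset" and the field names ("inputs", "targets", "language") are assumptions based on common conventions for instruction-tuning datasets, not details confirmed by this paper, and may differ from the published release.

```python
# Minimal sketch: load and inspect the Aya Dataset via the Hugging Face
# `datasets` library. The repo id and field names below are assumptions
# and may not match the actual release exactly.
from datasets import load_dataset

aya = load_dataset("CohereForAI/aya_dataset", split="train")

# Each record is expected to pair a natural-language instruction ("inputs")
# with a human-written completion ("targets") in a given language.
example = aya[0]
print(example.get("language"))
print(example.get("inputs"))
print(example.get("targets"))
```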
Shivalika Singh, Freddie Vargus, Daniel D’souza, Börje Karlsson, Abinaya Mahendiran, Wei-Yin Ko, Herumb Shandilya, Jay Patel, Deividas Mataciunas, Laura O’Mahony, Mike Zhang, Ramith Hettiarachchi, Joseph Wilson, Marina Machado, Luisa Moura, Dominik Krzemiński, Hakimeh Fadaei, Irem Ergun, Ifeoma Okoh, Aisha Alaagib, Oshan Mudannayake, Zaid Alyafeai, Vu Chien, Sebastian Ruder, Surya Guthikonda, Emad Alghamdi, Sebastian Gehrmann, Niklas Muennighoff, Max Bartolo, Julia Kreutzer, Ahmet Üstün, Marzieh Fadaee, and Sara Hooker. 2024. Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11521–11567, Bangkok, Thailand. Association for Computational Linguistics.
@inproceedings{singh-etal-2024-aya,
    title = "Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning",
    author = {Singh, Shivalika and Vargus, Freddie and D{'}souza, Daniel and Karlsson, B{\"o}rje and Mahendiran, Abinaya and Ko, Wei-Yin and Shandilya, Herumb and Patel, Jay and Mataciunas, Deividas and O{'}Mahony, Laura and Zhang, Mike and Hettiarachchi, Ramith and Wilson, Joseph and Machado, Marina and Moura, Luisa and Krzemi{\'n}ski, Dominik and Fadaei, Hakimeh and Ergun, Irem and Okoh, Ifeoma and Alaagib, Aisha and Mudannayake, Oshan and Alyafeai, Zaid and Chien, Vu and Ruder, Sebastian and Guthikonda, Surya and Alghamdi, Emad and Gehrmann, Sebastian and Muennighoff, Niklas and Bartolo, Max and Kreutzer, Julia and {\"U}st{\"u}n, Ahmet and Fadaee, Marzieh and Hooker, Sara},
    editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.620/",
    doi = "10.18653/v1/2024.acl-long.620",
    pages = "11521--11567",
    abstract = "Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the fine-tuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets. However, existing datasets are almost all in the English language. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances through templating and augmenting existing datasets across 114 languages. In total, we contribute three key resources: we develop and open-source the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as an important framework for future research collaborations that aim to bridge gaps in resources."
}