Recent work has shown that deeper character-based neural machine translation (NMT) models can outperform subword-based models. However, it is still unclear what makes deeper character-based models successful. In this paper, we investigate pure character-based models for translating Finnish into English, including their ability to learn word senses and morphological inflections, as well as the attention mechanism. We demonstrate that word-level information is distributed over the entire character sequence rather than concentrated in a single character, and that characters at different positions play different roles in learning linguistic knowledge. In addition, character-based models need more layers to encode word senses, which explains why only deeper models outperform subword-based models. The attention distribution pattern shows that separators attract a lot of attention, and we explore a sparse word-level attention to force character hidden states to capture the full word-level information. Experimental results show that word-level attention with a single head results in a drop of 1.2 BLEU points.
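The abstract mentions a sparse word-level attention intended to make character hidden states capture full word-level information. The snippet below is a minimal sketch of one way such a constraint can be realized, not the paper's implementation: single-head dot-product attention over character hidden states is masked so that each position attends only to characters of the same word, with whitespace assumed as the separator. All names, shapes, the grouping of the separator with the following word, and the toy Finnish string are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def word_span_mask(chars, sep=" "):
    """Boolean mask [len, len]: True where positions i and j belong to the same word.

    The separator character is (arbitrarily) grouped with the word that follows it.
    """
    word_ids, w = [], 0
    for c in chars:
        if c == sep:
            w += 1  # a separator starts a new word span
        word_ids.append(w)
    ids = torch.tensor(word_ids)
    return ids.unsqueeze(0) == ids.unsqueeze(1)

def word_level_attention(states, chars, sep=" "):
    """Single-head scaled dot-product attention restricted to word spans.

    states: [seq_len, d_model] character hidden states (no learned projections,
    for brevity of the sketch).
    """
    d = states.size(-1)
    scores = states @ states.t() / d ** 0.5            # [seq_len, seq_len]
    mask = word_span_mask(chars, sep)
    scores = scores.masked_fill(~mask, float("-inf"))  # block cross-word attention
    weights = F.softmax(scores, dim=-1)
    return weights @ states, weights

# Toy usage: characters of "on hyvä" (Finnish: "is good"), random hidden states.
chars = list("on hyvä")
states = torch.randn(len(chars), 8)
out, attn = word_level_attention(states, chars)
print(attn)  # nonzero weights only inside each word span
```

In the paper itself the attention is learned inside a full NMT model, and the abstract reports that restricting it to a single word-level head costs 1.2 BLEU; the sketch above only illustrates the word-span masking idea.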
@inproceedings{tang-etal-2020-understanding,
    title = "Understanding Pure Character-Based Neural Machine Translation: The Case of Translating {F}innish into {E}nglish",
    author = "Tang, Gongbo  and
      Sennrich, Rico  and
      Nivre, Joakim",
    editor = "Scott, Donia  and
      Bel, Nuria  and
      Zong, Chengqing",
    booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
    month = dec,
    year = "2020",
    address = "Barcelona, Spain (Online)",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2020.coling-main.375/",
    doi = "10.18653/v1/2020.coling-main.375",
    pages = "4251--4262",
    abstract = "Recent work has shown that deeper character-based neural machine translation (NMT) models can outperform subword-based models. However, it is still unclear what makes deeper character-based models successful. In this paper, we conduct an investigation into pure character-based models in the case of translating Finnish into English, including exploring the ability to learn word senses and morphological inflections and the attention mechanism. We demonstrate that word-level information is distributed over the entire character sequence rather than over a single character, and characters at different positions play different roles in learning linguistic knowledge. In addition, character-based models need more layers to encode word senses which explains why only deeper models outperform subword-based models. The attention distribution pattern shows that separators attract a lot of attention and we explore a sparse word-level attention to enforce character hidden states to capture the full word-level information. Experimental results show that the word-level attention with a single head results in 1.2 BLEU points drop."
}
[Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English](https://aclanthology.org/2020.coling-main.375/) (Tang et al., COLING 2020)