The adoption of Transformer-based models in natural language processing (NLP) has led to great success using a massive number of parameters. However, due to deployment constraints on edge devices, there has been rising interest in compressing these models to improve their inference time and memory footprint. This paper presents a novel loss objective for compressing token embeddings in Transformer-based models by leveraging an AutoEncoder architecture. More specifically, we emphasize the importance of the direction of the compressed embeddings with respect to the original uncompressed embeddings. The proposed method is task-agnostic and does not require further language-modeling pre-training. Our method significantly outperforms the commonly used SVD-based matrix-factorization approach in terms of initial language model perplexity. Moreover, we evaluate our approach on the SQuAD v1.1 dataset and several downstream tasks from the GLUE benchmark, where we also outperform the baseline in most scenarios. Our code is publicly available.
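To make the idea concrete, the sketch below shows one way such a direction-aware compression objective could look in PyTorch: a small linear autoencoder compresses each token embedding, and the training loss combines mean-squared reconstruction error with a (1 − cosine similarity) term that penalizes angular deviation from the original vector. This is a minimal illustration, not the authors' released code; the class names, the `lambda_dir` weight, and the exact combination of terms are assumptions made for the example.

```python
# Illustrative sketch (assumed formulation, not the paper's exact loss): compress a
# token-embedding matrix with a linear autoencoder whose objective also rewards
# keeping the *direction* of each reconstructed embedding close to the original.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingAutoEncoder(nn.Module):
    """Compress d-dimensional token embeddings into a k-dimensional code (k < d)."""

    def __init__(self, dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Linear(dim, code_dim, bias=False)
        self.decoder = nn.Linear(code_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def direction_aware_loss(recon: torch.Tensor, target: torch.Tensor,
                         lambda_dir: float = 1.0) -> torch.Tensor:
    """MSE reconstruction error plus a term encouraging directional agreement.

    (1 - cosine similarity) is zero when the reconstructed embedding points in
    exactly the same direction as the original, regardless of its norm.
    """
    mse = F.mse_loss(recon, target)
    direction = (1.0 - F.cosine_similarity(recon, target, dim=-1)).mean()
    return mse + lambda_dir * direction


if __name__ == "__main__":
    # Toy usage: compress a random 30k x 768 stand-in embedding table down to 128 dims.
    embeddings = torch.randn(30_000, 768)
    model = EmbeddingAutoEncoder(dim=768, code_dim=128)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(100):  # a few steps only, for illustration
        batch = embeddings[torch.randint(0, embeddings.size(0), (1024,))]
        loss = direction_aware_loss(model(batch), batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

For comparison, the SVD baseline mentioned in the abstract would factorize the embedding matrix directly (e.g. keeping the top-k singular components via `torch.linalg.svd`), which minimizes Frobenius reconstruction error but does not explicitly preserve the per-token direction that the loss above targets.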
@inproceedings{balazy-etal-2021-direction,
    title = "Direction is what you need: Improving Word Embedding Compression in Large Language Models",
    author = "Ba{\l}azy, Klaudia and Banaei, Mohammadreza and Lebret, R{\'e}mi and Tabor, Jacek and Aberer, Karl",
    editor = "Rogers, Anna and Calixto, Iacer and Vuli{\'c}, Ivan and Saphra, Naomi and Kassner, Nora and Camburu, Oana-Maria and Bansal, Trapit and Shwartz, Vered",
    booktitle = "Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.repl4nlp-1.32/",
    doi = "10.18653/v1/2021.repl4nlp-1.32",
    pages = "322--330"
}
[Direction is what you need: Improving Word Embedding Compression in Large Language Models](https://aclanthology.org/2021.repl4nlp-1.32/) (Bałazy et al., RepL4NLP 2021)