| Attention Is All You Need | |
|---|---|
| | *An illustration of the main components of the transformer model from the paper* |
| Project type | Artificial intelligence research |
| Objective | Provide a novel approach to train AI |
| Duration | 2017–present |
| Website | proceedings |
"Attention Is All You Need"[1] is a 2017 research paper inmachine learning authored by eight scientists working atGoogle. The paper introduced a newdeep learning architecture known as thetransformer, based on theattention mechanism proposed in 2014 by Bahdanauet al.[2] The transformer approach it describes has become the main architecture of a wide variety of AI, such aslarge language models.[3][4] At the time, the focus of the research was on improvingSeq2seq techniques formachine translation, but the authors go further in the paper, foreseeing the technique's potential for other tasks likequestion answering and what is now known asmultimodalgenerative AI.[1]
Some early examples that the team tried their Transformer architecture on included English-to-German translation, generating Wikipedia articles on "The Transformer", and parsing. These experiments convinced the team that the Transformer was a general-purpose language model, and not just good for translation.[5]
As of 2025, the paper has been cited more than 173,000 times, placing it among the top ten most-cited papers of the 21st century.[6]
The authors of the paper are Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, and Illia Polosukhin. All eight authors were "equal contributors" to the paper; the listed order was randomized (according to the paper itself). After the paper, each of the authors left Google to join other companies or to found startups.[7][8]
The paper's title is a reference to the song "All You Need Is Love" by the Beatles.[9] The name "Transformer" was picked because Jakob Uszkoreit, one of the paper's authors, liked the sound of that word.[5] An early design document was titled "Transformers: Iterative Self-Attention and Processing for Various Tasks", and included an illustration of six characters from the Transformers franchise. The team was named Team Transformer.[9]
The paper is best known for introducing the Transformer architecture, which underlies most modern large language models (LLMs). A key reason the architecture is preferred by most modern LLMs is that it is more parallelizable than its predecessors. This allows the operations needed for training to be accelerated on a GPU, enabling both faster training and models of larger sizes.
The paper introduced the following mechanisms as part of the development of the transformer architecture.
Scaled dot-product attention & self-attention
The use of scaled dot-product attention and the self-attention mechanism, instead of a recurrent neural network or long short-term memory (which rely on recurrence), allows for better performance as described in the following paragraphs. The paper describes scaled dot-product attention as follows:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\mathsf T}}{\sqrt{d_k}}\right)V$$

where $Q$, $K$, $V$ are respectively the query, key, and value matrices, and $d_k$ is the dimension of the key vectors.
Since the model relies on query (Q), key (K), and value (V) matrices that come from the same source (i.e., the input sequence or context window), this eliminates the need for RNNs and ensures the architecture is fully parallelizable. This differs from the original form of the attention mechanism introduced in 2014. Additionally, the paper discusses a scaling factor, applied with respect to the dimension of the key vectors (represented as $d_k$ and initially set to 64 in the paper), in the manner shown above.
In the specific context of translation, which the paper focused on, the Query and Key matrices are usually represented in embeddings corresponding to the source language, while the Value matrix corresponds to the target language.
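The scaled dot-product attention formula above can be sketched in a few lines of NumPy. This is a minimal illustration of the equation, not the paper's implementation; the toy shapes and random input are assumptions (the paper itself uses $d_k = 64$):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (len_q, len_k) similarity scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted average of value rows

# Toy self-attention example: 3 tokens with d_k = d_v = 4.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(X, X, X)  # Q = K = V from the same source
print(out.shape)  # (3, 4): one output vector per input token
```

Because every row of the score matrix is computed independently, the whole operation is a pair of matrix multiplications, which is what makes it easy to parallelize on a GPU.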
Multi-head attention
In the self-attention mechanism, queries (Q), keys (K), and values (V) are dynamically generated for each input sequence (typically limited by the size of the context window), allowing the model to focus on different parts of the input sequence at different steps. Multi-head attention enhances this process by introducing multiple parallel attention heads. Each attention head learns different linear projections of the Q, K, and V matrices. This allows the model to capture different aspects of the relationships between words in the sequence simultaneously, rather than focusing on a single aspect.
By doing this, multi-head attention ensures that the input embeddings are updated from a more varied and diverse set of perspectives. After the attention outputs from all heads are calculated, they are concatenated and passed through a final linear transformation to generate the output.
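The head-splitting, concatenation, and final linear projection described above can be sketched as follows. This is a simplified NumPy illustration under assumed shapes (random weight matrices standing in for learned projections), not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, W_q, W_k, W_v, W_o, n_heads):
    """Split the model dimension across n_heads independent attention heads."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    # Project the input, then give each head its own slice of Q, K, V.
    Q = (X @ W_q).reshape(seq_len, n_heads, d_head)
    K = (X @ W_k).reshape(seq_len, n_heads, d_head)
    V = (X @ W_v).reshape(seq_len, n_heads, d_head)
    heads = []
    for h in range(n_heads):
        scores = Q[:, h] @ K[:, h].T / np.sqrt(d_head)
        heads.append(softmax(scores) @ V[:, h])  # per-head attention output
    # Concatenate the head outputs and apply the final linear projection.
    return np.concatenate(heads, axis=-1) @ W_o

rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 8, 2, 5
X = rng.standard_normal((seq_len, d_model))
W_q, W_k, W_v, W_o = (rng.standard_normal((d_model, d_model)) for _ in range(4))
out = multi_head_attention(X, W_q, W_k, W_v, W_o, n_heads)
print(out.shape)  # (5, 8): same shape as the input sequence
```

Each head attends over the full sequence but through its own learned projection, so different heads can specialize in different relationships between tokens.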
Positional encoding
Since the Transformer does not rely on recurrence or convolution of the text in order to perform encoding and decoding, the paper relied on the use of sine and cosine wave functions to encode the position of the token into the embedding. The methods introduced in the paper are discussed below:
$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)$$

where $pos$, $i$, and $d_{\text{model}}$ correspond to the position of the word, the current dimension index, and the dimension of the model, respectively. The sine function is used for even indices of the embedding while the cosine function is used for odd indices. The resultant encoding is then added to the word embedding at that corresponding position with respect to the current context window. The paper specifically comments on why this method was chosen, describing:
"We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training."[1]
For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995),[note 1] an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units.[10] Neural networks using multiplicative units were later called sigma-pi networks[11] or higher-order networks.[12] LSTM became the standard architecture for long-sequence modelling until the 2017 publication of transformers. However, LSTM still used sequential processing, like most other RNNs.[note 2] Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input.[13] One of its two networks has "fast weights" or "dynamic links" (1981).[14][15][16] A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network, which computes answers to queries.[13] This was later shown to be equivalent to the unnormalized linear transformer.[17][18]
The idea of encoder–decoder sequence transduction had been developed in the early 2010s; commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.[19][20]
A 380M-parameter model for machine translation uses two long short-term memories (LSTMs).[20] Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM.[19] Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.[21][22]
These early seq2seq models had no attention mechanism, and the state vector is accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.[23]
The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".[2]
The relative performances of global (that of RNNsearch) and local (sliding-window) attention model architectures were compared for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.[24]
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM.[25] It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.[26]
Seq2seq models with attention (including self-attention) still suffered from the same issue as recurrent networks: they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved a SOTA result in textual entailment with an order of magnitude fewer parameters than LSTMs.[27] One of its authors, Jakob Uszkoreit, suspected that attention without recurrence would be sufficient for language translation, thus the title "attention is all you need".[28] That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical.[28] In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.[29]
In 2017, the original (100M-sized) encoder–decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance.[1] This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor in its widespread use in large neural networks.[30]
As early as spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles.[31] Transformer architecture is now used alongside manygenerative models that contribute to the ongoingAI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only transformer model.[32] In October 2019, Google started using BERT to process search queries.[33] In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a transformer-encoder–RNN-decoder model.[34]
Starting in 2018, the OpenAI GPT series of decoder-only transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly[35] popular, triggering a boom around large language models.[36][37]
Since 2020, transformers have been applied in modalities beyond text, including the vision transformer,[38] speech recognition,[39] robotics,[40] and multimodal learning.[41] The vision transformer, in turn, stimulated new developments in convolutional neural networks.[42] Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024),[43] and Sora (2024) use transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
While the primary focus of the paper at the time was to improve machine translation, the paper also discussed the use of the architecture on English constituency parsing, with both limited and large-sized datasets, achieving high scores without task-specific tuning. This indicated the model's promise for a wide variety of general-purpose seq2seq tasks.