Attention Is All You Need

Ashish Vaswani (Google Brain), Noam Shazeer (Google Brain), Niki Parmar (Google Research), Jakob Uszkoreit (Google Research), Llion Jones (Google Research), Aidan N. Gomez (University of Toronto)†, Lukasz Kaiser (Google Brain), Illia Polosukhin‡

* Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research.
† Work performed while at Google Brain.
‡ Work performed while at Google Research.

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Recurrent neural networks, long short-term memory [12] and gated recurrent [7] neural networks in particular, have been firmly established as state-of-the-art approaches in sequence modeling and transduction problems such as language modeling and machine translation [29, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [31, 21, 13].

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states h_t, as a function of the previous hidden state h_{t-1} and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [18] and conditional computation [26], while also improving model performance in the case of the latter.
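As a minimal illustration (not from the paper), the recurrence described above can be sketched as follows; the weight matrices are placeholders for learned parameters. The point is structural: because h_t depends on h_{t-1}, the time loop cannot be parallelized.

```python
import numpy as np

def rnn_forward(xs, W_h, W_x, b):
    """Sketch of a vanilla recurrent model: xs has shape (seq_len, d_in),
    W_h (d_h, d_h), W_x (d_h, d_in), b (d_h,). Returns (seq_len, d_h)."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in xs:  # inherently sequential: step t needs the state from step t-1
        h = np.tanh(W_h @ h + W_x @ x + b)
        states.append(h)
    return np.stack(states)
```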
The fundamental constraint of sequential computation, however, remains.

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 16]. In all but a few cases [22], however, such attention mechanisms are used in conjunction with a recurrent network.

In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

2 Background

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [20], ByteNet [15] and ConvS2S [8], all of which use convolutional neural networks as basic building blocks, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [1]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in Section 3.2.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 22, 23, 19].

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [28].

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [14, 15] and [8].

3 Model Architecture

Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 29]. Here, the encoder maps an input sequence of symbol representations (x_1, ..., x_n) to a sequence of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output sequence (y_1, ..., y_m) of symbols one element at a time. At each step the model is auto-regressive [9], consuming the previously generated symbols as additional input when generating the next, as illustrated in the sketch below.

The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.
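To make the auto-regressive property concrete, here is a minimal sketch (not from the paper); `step_fn` is a hypothetical stand-in for any encoder-decoder model that maps a source sequence and the target prefix to next-token logits, and the token ids are placeholders.

```python
import numpy as np

def greedy_decode(step_fn, src, bos_id=1, eos_id=2, max_len=50):
    """Greedy auto-regressive decoding: each step feeds back the
    symbols generated so far as additional input."""
    ys = [bos_id]
    for _ in range(max_len):
        logits = step_fn(src, np.array(ys))  # logits over the output vocabulary
        next_id = int(np.argmax(logits))     # most probable next symbol
        ys.append(next_id)
        if next_id == eos_id:                # stop at end-of-sequence
            break
    return ys
```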
3.1 Encoder and Decoder Stacks

Figure 1: The Transformer - model architecture.

Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [10] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.

Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

3.2 Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

3.2.1 Scaled Dot-Product Attention

We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the query with all keys, divide each by \sqrt{d_k}, and apply a softmax function to obtain the weights on the values.

Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as

    Attention(Q, K, V) = softmax(QK^T / \sqrt{d_k}) V    (1)

The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/\sqrt{d_k}. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of d_k [3]. We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.* To counteract this effect, we scale the dot products by 1/\sqrt{d_k}.

* To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = \sum_{i=1}^{d_k} q_i k_i, has mean 0 and variance d_k.
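The following is a hedged NumPy sketch of Eq. (1); the array shapes and the optional boolean `mask` argument (used later for the decoder's illegal connections, Section 3.2.3) are our own conventions, not the paper's code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Eq. (1): Q (n_q, d_k), K (n_k, d_k), V (n_k, d_v).
    `mask` is an optional boolean (n_q, n_k) array; False entries are
    set to -inf before the softmax so they receive zero weight."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # scaled query-key compatibilities
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights              # weighted sum of values
```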
3.2.2 Multi-Head Attention

Instead of performing a single attention function with d_model-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_k, d_k, and d_v dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

    MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
        where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

where the projections are parameter matrices W_i^Q ∈ R^{d_model × d_k}, W_i^K ∈ R^{d_model × d_k}, W_i^V ∈ R^{d_model × d_v} and W^O ∈ R^{h·d_v × d_model}.

In this work we employ h = 8 parallel attention layers, or heads. For each of these we use d_k = d_v = d_model / h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.

3.2.3 Applications of Attention in our Model

The Transformer uses multi-head attention in three different ways:

* In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [31, 2, 8].

* The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.

* Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to -∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.

3.3 Position-wise Feed-Forward Networks

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between:

    FFN(x) = max(0, x W_1 + b_1) W_2 + b_2    (2)

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is d_model = 512, and the inner-layer has dimensionality d_ff = 2048.
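Tying Sections 3.1-3.3 together, here is a hedged sketch of a single encoder layer: multi-head self-attention (3.2.2) and the position-wise FFN of Eq. (2), each wrapped in the residual-plus-LayerNorm of Section 3.1. It reuses scaled_dot_product_attention from above; the random matrices are placeholders for learned parameters, the per-head loop favors clarity over speed, and the LayerNorm omits the usual learned gain and bias.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, h = 512, 8
d_k = d_v = d_model // h          # 64, as in the paper
d_ff = 2048

# Placeholder parameters: per-head projections W_i^Q, W_i^K, W_i^V,
# the output projection W^O, and the two FFN layers (W_1, b_1), (W_2, b_2).
W_Q = rng.normal(size=(h, d_model, d_k)) / np.sqrt(d_model)
W_K = rng.normal(size=(h, d_model, d_k)) / np.sqrt(d_model)
W_V = rng.normal(size=(h, d_model, d_v)) / np.sqrt(d_model)
W_O = rng.normal(size=(h * d_v, d_model)) / np.sqrt(h * d_v)
W_1, b_1 = rng.normal(size=(d_model, d_ff)) / np.sqrt(d_model), np.zeros(d_ff)
W_2, b_2 = rng.normal(size=(d_ff, d_model)) / np.sqrt(d_ff), np.zeros(d_model)

def multi_head_self_attention(x):
    """x: (n, d_model). Project h times, attend in parallel (here, a loop),
    then concatenate and project back: Concat(head_1..head_h) W^O."""
    heads = []
    for i in range(h):
        out, _ = scaled_dot_product_attention(x @ W_Q[i], x @ W_K[i], x @ W_V[i])
        heads.append(out)                     # each head is (n, d_v)
    return np.concatenate(heads, axis=-1) @ W_O

def layer_norm(x, eps=1e-6):
    """Per-position normalization (learned gain/bias omitted for brevity)."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def ffn(x):
    """Eq. (2): FFN(x) = max(0, x W_1 + b_1) W_2 + b_2, applied per position."""
    return np.maximum(0.0, x @ W_1 + b_1) @ W_2 + b_2

def encoder_layer(x):
    """One of the N = 6 encoder layers: LayerNorm(x + Sublayer(x)) twice."""
    x = layer_norm(x + multi_head_self_attention(x))  # sub-layer 1
    return layer_norm(x + ffn(x))                     # sub-layer 2
```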
3.4 Embeddings and Softmax

Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [24]. In the embedding layers, we multiply those weights by \sqrt{d_model}.

3.5 Positional Encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks.

Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

Layer Type                  | Complexity per Layer | Sequential Operations | Maximum Path Length
Self-Attention              | O(n^2 · d)           | O(1)                  | O(1)
Recurrent                   | O(n · d^2)           | O(n)                  | O(n)
Convolutional               | O(k · n · d^2)       | O(1)                  | O(log_k(n))
Self-Attention (restricted) | O(r · n · d)         | O(1)                  | O(n/r)
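The excerpt breaks off before Section 3.5 defines the encodings; in the full paper they are fixed sinusoids, PE(pos, 2i) = sin(pos / 10000^{2i/d_model}) and PE(pos, 2i+1) = cos(pos / 10000^{2i/d_model}). A sketch of that form:

```python
import numpy as np

def positional_encoding(n_positions, d_model=512):
    """Sinusoidal positional encodings (as defined in the full paper):
    even indices get sin(pos / 10000**(2i/d_model)), odd indices get cos.
    Returns an (n_positions, d_model) array to add to the embeddings."""
    pos = np.arange(n_positions)[:, None]        # (n, 1)
    i = np.arange(0, d_model, 2)[None, :]        # (1, d_model/2), even dims
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```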
