GRU

class torch.ao.nn.quantized.dynamic.GRU(*args, **kwargs)

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

For each element in the input sequence, each layer computes the following function:

$$
\begin{array}{ll}
r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\
z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\
n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{(t-1)} + b_{hn})) \\
h_t = (1 - z_t) \odot n_t + z_t \odot h_{(t-1)}
\end{array}
$$

where $h_t$ is the hidden state at time $t$, $x_t$ is the input at time $t$, $h_{(t-1)}$ is the hidden state of the layer at time $t-1$ or the initial hidden state at time $0$, and $r_t$, $z_t$, $n_t$ are the reset, update, and new gates, respectively. $\sigma$ is the sigmoid function, and $\odot$ is the Hadamard product.

In a multilayer GRU, the input $x^{(l)}_t$ of the $l$-th layer ($l \ge 2$) is the hidden state $h^{(l-1)}_t$ of the previous layer multiplied by dropout $\delta^{(l-1)}_t$, where each $\delta^{(l-1)}_t$ is a Bernoulli random variable which is $0$ with probability dropout.
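The update equations above can be checked numerically against `torch.nn.GRUCell`, which performs the same single-step computation (a sketch; the chunked r/z/n ordering of the stacked weights matches the variables described further below):

```python
import torch

torch.manual_seed(0)
input_size, hidden_size = 4, 3
cell = torch.nn.GRUCell(input_size, hidden_size)
x = torch.randn(1, input_size)   # x_t
h = torch.randn(1, hidden_size)  # h_{t-1}

# the stacked weights hold the r, z, n chunks in that order
W_ir, W_iz, W_in = cell.weight_ih.chunk(3, 0)
W_hr, W_hz, W_hn = cell.weight_hh.chunk(3, 0)
b_ir, b_iz, b_in = cell.bias_ih.chunk(3, 0)
b_hr, b_hz, b_hn = cell.bias_hh.chunk(3, 0)

# apply the four equations literally
r = torch.sigmoid(x @ W_ir.T + b_ir + h @ W_hr.T + b_hr)
z = torch.sigmoid(x @ W_iz.T + b_iz + h @ W_hz.T + b_hz)
n = torch.tanh(x @ W_in.T + b_in + r * (h @ W_hn.T + b_hn))
h_t = (1 - z) * n + z * h

# the manual step matches the built-in cell
assert torch.allclose(h_t, cell(x, h), atol=1e-6)
```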

Parameters
  • input_size – The number of expected features in the input x

  • hidden_size – The number of features in the hidden state h

  • num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1

  • bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False

  • dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional – If True, becomes a bidirectional GRU. Default: False
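This class is usually not constructed directly. A common route is `torch.ao.quantization.quantize_dynamic`, which swaps float `nn.GRU` submodules for this dynamically quantized version (a minimal sketch):

```python
import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=10, hidden_size=20, num_layers=2)

    def forward(self, x, h0):
        return self.gru(x, h0)


model = Model()
# swap the float GRU submodule for torch.ao.nn.quantized.dynamic.GRU
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.GRU}, dtype=torch.qint8)

x = torch.randn(5, 3, 10)   # (seq_len, batch, input_size)
h0 = torch.randn(2, 3, 20)  # (num_layers, batch, hidden_size)
output, h_n = qmodel(x, h0)
assert output.shape == (5, 3, 20) and h_n.shape == (2, 3, 20)
```

The quantized module keeps the same input/output contract as the float `nn.GRU`; only the weights are stored and used in quantized form.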

Inputs: input, h_0
  • input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() for details.

  • h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.

Outputs: output, h_n
  • output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features h_t from the last layer of the GRU, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively.

    Similarly, the directions can be separated in the packed case.

  • h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len

    Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size).
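As a concrete check of the shapes above, a bidirectional float `nn.GRU` (which follows the same input/output contract) can be inspected like this (a sketch):

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size, num_layers = 5, 3, 10, 20, 2
num_directions = 2  # bidirectional=True
rnn = nn.GRU(input_size, hidden_size, num_layers, bidirectional=True)

x = torch.randn(seq_len, batch, input_size)
output, h_n = rnn(x)  # h_0 defaults to zeros

assert output.shape == (seq_len, batch, num_directions * hidden_size)
assert h_n.shape == (num_layers * num_directions, batch, hidden_size)

# separate directions: forward is index 0, backward is index 1
by_dir = output.view(seq_len, batch, num_directions, hidden_size)
by_layer = h_n.view(num_layers, num_directions, batch, hidden_size)
```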

Shape:
Variables
  • weight_ih_l[k] – the learnable input-hidden weights of the $\text{k}^{th}$ layer (W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size)

  • weight_hh_l[k] – the learnable hidden-hidden weights of the $\text{k}^{th}$ layer (W_hr|W_hz|W_hn), of shape (3*hidden_size, hidden_size)

  • bias_ih_l[k] – the learnable input-hidden bias of the $\text{k}^{th}$ layer (b_ir|b_iz|b_in), of shape (3*hidden_size)

  • bias_hh_l[k] – the learnable hidden-hidden bias of the $\text{k}^{th}$ layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)

Note

All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
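For the float `nn.GRU` this initialization can be verified directly (a sketch; the quantized module stores its weights in packed form instead):

```python
import math

import torch

hidden_size = 20
rnn = torch.nn.GRU(10, hidden_size)
bound = math.sqrt(1 / hidden_size)  # sqrt(k)

# every freshly initialized weight and bias lies within [-sqrt(k), sqrt(k)]
for name, p in rnn.named_parameters():
    assert p.min().item() >= -bound and p.max().item() <= bound
```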

Note

The calculation of the new gate $n_t$ subtly differs from the original paper and other frameworks. In the original implementation, the Hadamard product ($\odot$) between $r_t$ and the previous hidden state $h_{(t-1)}$ is done before the multiplication with the weight matrix $W$ and the addition of bias:

$$
n_t = \tanh(W_{in} x_t + b_{in} + W_{hn} (r_t \odot h_{(t-1)}) + b_{hn})
$$

This is in contrast to the PyTorch implementation, in which the product is applied after $W_{hn} h_{(t-1)}$:

$$
n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{(t-1)} + b_{hn}))
$$

This implementation differs on purpose for efficiency.
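The difference between the two formulations can be seen numerically with arbitrary tensors (an illustrative sketch, not part of the module's API):

```python
import torch

torch.manual_seed(0)
hidden_size = 4
x_proj = torch.randn(hidden_size)            # stands in for W_in x_t + b_in
h_prev = torch.randn(hidden_size)            # h_{t-1}
W_hn = torch.randn(hidden_size, hidden_size)
b_hn = torch.randn(hidden_size)
r = torch.sigmoid(torch.randn(hidden_size))  # reset gate r_t

# original paper: reset gate applied before the hidden-hidden matmul
n_paper = torch.tanh(x_proj + W_hn @ (r * h_prev) + b_hn)
# PyTorch: reset gate applied after the matmul and bias
n_pytorch = torch.tanh(x_proj + r * (W_hn @ h_prev + b_hn))

# the two gates generally produce different values
assert not torch.allclose(n_paper, n_pytorch)
```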

Note

If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, then the persistent algorithm can be selected to improve performance.

Examples:

>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)