
GRU

class torch.nn.GRU(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, device=None, dtype=None)

Apply a multi-layer gated recurrent unit (GRU) RNN to an input sequence. For each element in the input sequence, each layer computes the following function:

\begin{array}{ll}
r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\
z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\
n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{(t-1)} + b_{hn})) \\
h_t = (1 - z_t) \odot n_t + z_t \odot h_{(t-1)}
\end{array}

where $h_t$ is the hidden state at time $t$, $x_t$ is the input at time $t$, $h_{(t-1)}$ is the hidden state of the layer at time $t-1$ or the initial hidden state at time $0$, and $r_t$, $z_t$, $n_t$ are the reset, update, and new gates, respectively. $\sigma$ is the sigmoid function, and $\odot$ is the Hadamard product.
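As a sanity check of these equations, the sketch below recomputes a single step by hand from the module's own parameters and compares it with the module output (a minimal sketch; sizes are chosen for illustration and a single unidirectional layer is assumed):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    gru = nn.GRU(input_size=4, hidden_size=3)   # one unidirectional layer
    x = torch.randn(1, 1, 4)                    # (seq_len=1, batch=1, input_size)
    h0 = torch.zeros(1, 1, 3)                   # (num_layers, batch, hidden_size)
    _, h1 = gru(x, h0)

    # weight_ih_l0 stacks (W_ir | W_iz | W_in) row-wise; likewise for the hidden weights and biases.
    W_ir, W_iz, W_in = gru.weight_ih_l0.chunk(3, dim=0)
    W_hr, W_hz, W_hn = gru.weight_hh_l0.chunk(3, dim=0)
    b_ir, b_iz, b_in = gru.bias_ih_l0.chunk(3)
    b_hr, b_hz, b_hn = gru.bias_hh_l0.chunk(3)

    xt, hprev = x[0, 0], h0[0, 0]
    r = torch.sigmoid(W_ir @ xt + b_ir + W_hr @ hprev + b_hr)
    z = torch.sigmoid(W_iz @ xt + b_iz + W_hz @ hprev + b_hz)
    n = torch.tanh(W_in @ xt + b_in + r * (W_hn @ hprev + b_hn))
    ht = (1 - z) * n + z * hprev

    print(torch.allclose(ht, h1[0, 0], atol=1e-6))   # expected: True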

In a multilayer GRU, the input $x^{(l)}_t$ of the $l$-th layer ($l \ge 2$) is the hidden state $h^{(l-1)}_t$ of the previous layer multiplied by dropout $\delta^{(l-1)}_t$, where each $\delta^{(l-1)}_t$ is a Bernoulli random variable which is $0$ with probability dropout.
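Structurally, a two-layer GRU with dropout behaves like two single-layer GRUs with a Dropout module between them, as in the sketch below (a conceptual illustration only; it is not numerically identical to nn.GRU(10, 20, num_layers=2, dropout=0.5), whose layers have their own independently initialized weights):

    import torch
    import torch.nn as nn

    layer1 = nn.GRU(10, 20)      # first layer: input_size -> hidden_size
    layer2 = nn.GRU(20, 20)      # second layer consumes the first layer's hidden states
    drop = nn.Dropout(0.5)       # Bernoulli mask; zeroes elements with probability 0.5

    x = torch.randn(5, 3, 10)            # (seq_len, batch, input_size)
    h1_seq, _ = layer1(x)                # hidden states of layer 1 at every time step
    h2_seq, _ = layer2(drop(h1_seq))     # dropout is applied only between layers, never after the last layer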

Parameters:
  • input_size – The number of expected features in the input x

  • hidden_size – The number of features in the hidden state h

  • num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1

  • bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details, and the layout sketch after this list. Default: False

  • dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional – If True, becomes a bidirectional GRU. Default: False
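To make the batch_first layout concrete, the following sketch (illustrative sizes) feeds the same kind of data in both layouts; note that the hidden state keeps its (num_layers, batch, hidden_size) layout either way:

    import torch
    import torch.nn as nn

    gru = nn.GRU(10, 16)                         # expects (seq, batch, feature)
    gru_bf = nn.GRU(10, 16, batch_first=True)    # expects (batch, seq, feature)

    out, h_n = gru(torch.randn(7, 4, 10))        # seq_len=7, batch=4
    out_bf, h_n_bf = gru_bf(torch.randn(4, 7, 10))

    print(out.shape)                  # torch.Size([7, 4, 16])
    print(out_bf.shape)               # torch.Size([4, 7, 16])
    print(h_n.shape, h_n_bf.shape)    # both torch.Size([1, 4, 16]); batch_first does not affect the hidden state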

Inputs: input, h_0

  • input: tensor of shape (L, H_in) for unbatched input, (L, N, H_in) when batch_first=False or (N, L, H_in) when batch_first=True, containing the features of the input sequence.

  • h_0: tensor of shape (D * num_layers, H_out) for unbatched input or (D * num_layers, N, H_out) containing the initial hidden state. Defaults to zeros if not provided.

where:

\begin{aligned}
N ={} & \text{batch size} \\
L ={} & \text{sequence length} \\
D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\
H_{in} ={} & \text{input\_size} \\
H_{out} ={} & \text{hidden\_size}
\end{aligned}
Outputs: output, h_n

  • output: tensor of shape (L, D * H_out) for unbatched input, (L, N, D * H_out) when batch_first=False or (N, L, D * H_out) when batch_first=True, containing the output features (h_t) from the last layer of the GRU for each t.

  • h_n: tensor of shape (D * num_layers, H_out) for unbatched input or (D * num_layers, N, H_out) containing the final hidden state for the input sequence.
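A brief shape check for a stacked bidirectional GRU (illustrative sizes), matching the D and H_out conventions above:

    import torch
    import torch.nn as nn

    gru = nn.GRU(input_size=10, hidden_size=16, num_layers=2, bidirectional=True)
    x = torch.randn(7, 4, 10)        # (L=7, N=4, H_in=10)

    output, h_n = gru(x)             # h_0 defaults to zeros
    print(output.shape)              # torch.Size([7, 4, 32])  -> (L, N, D * H_out), D = 2
    print(h_n.shape)                 # torch.Size([4, 4, 16])  -> (D * num_layers, N, H_out)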
Variables:
  • weight_ih_l[k] – the learnable input-hidden weights of the $k^{th}$ layer (W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size). (A shape-inspection sketch follows this list.)

  • weight_hh_l[k] – the learnable hidden-hidden weights of the $k^{th}$ layer (W_hr|W_hz|W_hn), of shape (3*hidden_size, hidden_size)

  • bias_ih_l[k] – the learnable input-hidden bias of the $k^{th}$ layer (b_ir|b_iz|b_in), of shape (3*hidden_size)

  • bias_hh_l[k] – the learnable hidden-hidden bias of the $k^{th}$ layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)
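The flat parameters listed above can be inspected directly; a quick sketch (illustrative sizes) prints the registered names and shapes:

    import torch.nn as nn

    gru = nn.GRU(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
    for name, param in gru.named_parameters():
        print(name, tuple(param.shape))
    # weight_ih_l0          (60, 10)   <- 3 * hidden_size rows: (W_ir | W_iz | W_in)
    # weight_hh_l0          (60, 20)
    # bias_ih_l0            (60,)
    # bias_hh_l0            (60,)
    # weight_ih_l0_reverse  (60, 10)   <- reverse-direction parameters (bidirectional=True)
    # ...
    # weight_ih_l1          (60, 40)   <- layer 1 sees num_directions * hidden_size input features
    # ...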

Note

All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
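A quick empirical check of this range (a sketch; hidden_size=20 is arbitrary):

    import math
    import torch.nn as nn

    gru = nn.GRU(input_size=10, hidden_size=20)
    bound = math.sqrt(1.0 / 20)     # sqrt(k) with k = 1 / hidden_size

    for name, p in gru.named_parameters():
        assert p.min().item() >= -bound and p.max().item() <= bound, name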

Note

For bidirectional GRUs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when batch_first=False: output.view(seq_len, batch, num_directions, hidden_size).
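For example (illustrative sizes, batch_first=False):

    import torch
    import torch.nn as nn

    seq_len, batch, hidden_size, num_directions = 7, 4, 16, 2
    gru = nn.GRU(10, hidden_size, bidirectional=True)
    output, h_n = gru(torch.randn(seq_len, batch, 10))

    directions = output.view(seq_len, batch, num_directions, hidden_size)
    forward_out = directions[..., 0, :]     # direction 0: forward
    backward_out = directions[..., 1, :]    # direction 1: backward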

Note

batch_first argument is ignored for unbatched inputs.
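That is, a 2-D unbatched input of shape (L, H_in) is interpreted the same way regardless of batch_first (a small sketch with illustrative sizes):

    import torch
    import torch.nn as nn

    x = torch.randn(7, 10)                         # unbatched: (L, H_in)
    out_a, _ = nn.GRU(10, 16)(x)
    out_b, _ = nn.GRU(10, 16, batch_first=True)(x)
    print(out_a.shape, out_b.shape)                # both torch.Size([7, 16])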

Note

The calculation of the new gate $n_t$ subtly differs from the original paper and other frameworks. In the original implementation, the Hadamard product ($\odot$) between $r_t$ and the previous hidden state $h_{(t-1)}$ is done before the multiplication with the weight matrix $W$ and the addition of bias:

n_t = \tanh(W_{in} x_t + b_{in} + W_{hn} (r_t \odot h_{(t-1)}) + b_{hn})

This is in contrast to the PyTorch implementation, which applies it after $W_{hn} h_{(t-1)}$:

n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{(t-1)} + b_{hn}))

This implementation differs on purpose for efficiency.
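The two formulations generally produce different values for $n_t$; the sketch below compares them with arbitrary random weights (not tied to any module):

    import torch

    torch.manual_seed(0)
    hid, inp = 3, 4
    W_in, W_hn = torch.randn(hid, inp), torch.randn(hid, hid)
    b_in, b_hn = torch.randn(hid), torch.randn(hid)
    x_t, h_prev, r_t = torch.randn(inp), torch.randn(hid), torch.rand(hid)

    # Original paper: reset gate applied to h_{t-1} before the matrix multiplication
    n_paper = torch.tanh(W_in @ x_t + b_in + W_hn @ (r_t * h_prev) + b_hn)

    # PyTorch: reset gate applied after W_hn h_{t-1} + b_hn
    n_torch = torch.tanh(W_in @ x_t + b_in + r_t * (W_hn @ h_prev + b_hn))

    print(torch.allclose(n_paper, n_torch))      # generally False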

Note

If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, the persistent algorithm can be selected to improve performance.
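A setup that satisfies these conditions might look like the sketch below (it assumes a CUDA build of PyTorch and a suitable GPU; whether cuDNN actually selects the persistent algorithm is an internal decision):

    import torch
    import torch.nn as nn

    if torch.cuda.is_available() and torch.backends.cudnn.enabled:
        gru = nn.GRU(10, 20, 2).cuda().half()    # module parameters on the GPU in float16
        x = torch.randn(5, 3, 10, device="cuda", dtype=torch.float16)   # plain tensor, not a PackedSequence
        h0 = torch.zeros(2, 3, 20, device="cuda", dtype=torch.float16)
        output, hn = gru(x, h0)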

Examples:

>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
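In this example, output has shape (5, 3, 20), i.e. (L, N, D * H_out) with D = 1, and hn has shape (2, 3, 20), i.e. (D * num_layers, N, H_out).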