BatchNorm1d

class torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)

Applies Batch Normalization over a 2D or 3D input.

Method described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

The mean and standard-deviation are calculated per-dimension over the mini-batches, and γ and β are learnable parameter vectors of size C (where C is the number of features or channels of the input). By default, the elements of γ are set to 1 and the elements of β are set to 0. At train time in the forward pass, the variance is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). However, the value stored in the moving average of the variance is calculated via the unbiased estimator, equivalent to torch.var(input, unbiased=True).
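The biased/unbiased distinction can be checked directly. The sketch below (assuming default momentum 0.1 and default parameter initialization) normalizes a batch by hand with the biased variance and reproduces the layer's output, then confirms that running_var was updated with the unbiased estimator:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.BatchNorm1d(3)
m.train()
x = torch.randn(8, 3)
y = m(x)

# Forward pass normalizes with the *biased* batch variance
# (gamma=1 and beta=0 at initialization, so no affine terms needed here).
biased_var = x.var(dim=0, unbiased=False)
manual = (x - x.mean(dim=0)) / torch.sqrt(biased_var + m.eps)
print(torch.allclose(y, manual, atol=1e-5))  # True

# running_var, however, is updated with the *unbiased* estimator:
# new = (1 - momentum) * old + momentum * unbiased_var, old = 1 at init.
unbiased_var = x.var(dim=0, unbiased=True)
expected_running_var = (1 - m.momentum) * torch.ones(3) + m.momentum * unbiased_var
print(torch.allclose(m.running_var, expected_running_var, atol=1e-5))  # True
```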

Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.

If track_running_stats is set to False, this layer does not keep running estimates, and batch statistics are instead used during evaluation time as well.
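A quick sketch of that behavior (using affine=False so the output is purely the normalization): with track_running_stats=False the buffers are None and, even in eval mode, the layer normalizes each batch with that batch's own statistics:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.BatchNorm1d(4, track_running_stats=False, affine=False)
m.eval()  # eval mode, yet batch statistics are still used
x = torch.randn(16, 4)
y = m(x)

# Output matches normalization with the statistics of this very batch.
manual = (x - x.mean(0)) / torch.sqrt(x.var(0, unbiased=False) + m.eps)
print(torch.allclose(y, manual, atol=1e-5))  # True

# No running estimates are kept: the buffers are None.
print(m.running_mean)  # None
```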

Note

This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is

\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t,

where \hat{x} is the estimated statistic and x_t is the new observed value.
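The update rule can be verified with one training step on a hand-picked batch (a sketch assuming the default momentum of 0.1 and the zero-initialized running_mean):

```python
import torch
import torch.nn as nn

m = nn.BatchNorm1d(2, affine=False)  # momentum defaults to 0.1
x = torch.tensor([[1.0, 10.0],
                  [3.0, 30.0]])      # per-channel batch means: [2., 20.]
m.train()
m(x)

# One step of the update rule, starting from running_mean = 0:
# (1 - 0.1) * 0 + 0.1 * [2., 20.] = [0.2, 2.0]
print(m.running_mean)  # tensor([0.2000, 2.0000])
```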

Because Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it's common terminology to call this Temporal Batch Normalization.

Parameters
  • num_features (int) – number of features or channels C of the input

  • eps (float) – a value added to the denominator for numerical stability. Default: 1e-5

  • momentum (Optional[float]) – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1

  • affine (bool) – a boolean value that when set to True, this module has learnable affine parameters. Default: True

  • track_running_stats (bool) – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes the statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True

Shape:
  • Input: (N, C) or (N, C, L), where N is the batch size, C is the number of features or channels, and L is the sequence length

  • Output: (N, C) or (N, C, L) (same shape as input)

Examples:

>>> # With Learnable Parameters
>>> m = nn.BatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm1d(100, affine=False)
>>> input = torch.randn(20, 100)
>>> output = m(input)
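The example above uses a 2D (N, C) input; the same module also accepts 3D (N, C, L) inputs, where statistics are computed per channel over both the batch and length dimensions. A short sketch (the batch size 20 and length 35 are arbitrary illustrative values):

```python
import torch
import torch.nn as nn

# (N, C, L) input: batch of 20 sequences, 100 channels, length 35.
m = nn.BatchNorm1d(100)
input = torch.randn(20, 100, 35)
output = m(input)
print(output.shape)  # torch.Size([20, 100, 35]) -- same shape as the input
```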