
BatchNorm2d

class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)

Applies Batch Normalization over a 4D input.

4D is a mini-batch of 2D inputs with an additional channel dimension. Method described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

The mean and standard-deviation are calculated per-dimension over the mini-batches, and γ and β are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ are set to 1 and the elements of β are set to 0. At train time in the forward pass, the standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). However, the value stored in the moving average of the standard-deviation is calculated via the unbiased estimator, equivalent to torch.var(input, unbiased=True).
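As a sketch of the formula above, the normalization with the biased variance estimator can be reproduced in NumPy (NumPy stands in for torch here to keep the example self-contained; the default γ = 1 and β = 0, and a single flattened channel, are assumed):

```python
import numpy as np

# One channel's worth of activations, flattened for brevity.
x = np.array([1.0, 2.0, 3.0, 4.0])
eps = 1e-5

mean = x.mean()
var = x.var()                         # biased estimator (divides by N), as in the forward pass
y = (x - mean) / np.sqrt(var + eps)   # gamma = 1, beta = 0 (the defaults)

# The running-average buffer would instead store the unbiased estimate:
var_unbiased = x.var(ddof=1)          # divides by N - 1
```

The normalized output y has (approximately) zero mean and unit variance; the small eps keeps the division stable when the batch variance is near zero.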

Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.

If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.

Note

This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.
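The update rule in this note can be illustrated with a few lines of plain Python (the per-batch values 1.0, 1.0, 1.0 are made-up observations):

```python
# Running-statistics update: x_hat_new = (1 - momentum) * x_hat + momentum * x_t
momentum = 0.1
running_mean = 0.0                    # estimated statistic, initialized to 0
for batch_mean in [1.0, 1.0, 1.0]:    # observed per-batch values x_t
    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
# after three batches: 0.1, 0.19, 0.271 -- slowly approaching 1.0
```

A small momentum therefore means the running estimate changes slowly, weighting history heavily over any single batch.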

Because Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it is common terminology to call this Spatial Batch Normalization.
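A minimal NumPy sketch of these per-channel statistics (NumPy stands in for torch; the shapes below are arbitrary illustrative values):

```python
import numpy as np

N, C, H, W = 4, 3, 5, 5
x = np.random.randn(N, C, H, W)

# Statistics are computed over the (N, H, W) axes: one value per channel C.
mean = x.mean(axis=(0, 2, 3))   # shape: (C,)
var = x.var(axis=(0, 2, 3))     # shape: (C,)
```

Each of the C channels is normalized with its own mean and variance, which is why num_features must equal C.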

Parameters
  • num_features (int) – C from an expected input of size (N, C, H, W)

  • eps (float) – a value added to the denominator for numerical stability. Default: 1e-5

  • momentum (Optional[float]) – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1

  • affine (bool) – a boolean value that when set to True, this module has learnable affine parameters. Default: True

  • track_running_stats (bool) – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
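When momentum is set to None, the running estimates reduce to a cumulative (simple) moving average over all batches seen. A plain-Python sketch of that behavior (the per-batch values are made-up observations):

```python
# Cumulative moving average: x_hat_new = x_hat + (x_t - x_hat) / n
running_mean, n = 0.0, 0
for batch_mean in [2.0, 4.0, 6.0]:    # observed per-batch values x_t
    n += 1
    running_mean += (batch_mean - running_mean) / n
# running_mean is now the simple average of all batch means seen so far
```

Unlike the fixed-momentum update, every batch contributes equally to the final estimate regardless of when it was observed.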

Shape:

  • Input: (N, C, H, W)

  • Output: (N, C, H, W) (same shape as input)

Examples:

>>> # With Learnable Parameters
>>> m = nn.BatchNorm2d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm2d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input)