FeatureAlphaDropout
- class torch.nn.FeatureAlphaDropout(p=0.5, inplace=False) [source]
Randomly masks out entire channels.
A channel is a feature map, e.g. the j-th channel of the i-th sample in the batched input is a tensor input[i, j] of the input tensor. Instead of setting activations to zero, as in regular Dropout, the activations are set to the negative saturation value of the SELU activation function. More details can be found in the paper Self-Normalizing Neural Networks.
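A minimal sketch of that behavior (shapes and seed are arbitrary, not part of the API): in training mode, a dropped channel comes out as a single constant, the SELU saturation value after the variance-preserving affine transform, rather than zeros.

import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.FeatureAlphaDropout(p=0.5)   # modules start in training mode
x = torch.randn(1, 8, 2, 4, 4)      # (N, C, D, H, W)
y = m(x)

for c in range(y.size(1)):
    chan = y[0, c]
    if chan.max() == chan.min():    # the whole feature map is one constant
        print(f"channel {c} masked -> constant {chan.max().item():.4f}")
    else:
        print(f"channel {c} kept")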
Each element will be masked independently for each sample on every forward call with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit variance.

Usually the input comes from nn.AlphaDropout modules.

As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
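An illustrative sketch of that contrast (shapes and seed are arbitrary): with element-wise i.i.d. masking, a whole feature map essentially never comes out constant, whereas this module drops entire feature maps at once.

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 4, 2, 3, 3)                  # (N, C, D, H, W)

elementwise = nn.AlphaDropout(p=0.5)(x)         # masks individual elements
channelwise = nn.FeatureAlphaDropout(p=0.5)(x)  # masks whole feature maps

def n_constant_channels(t):
    flat = t.flatten(2)                         # (N, C, D*H*W)
    return (flat.max(dim=2).values == flat.min(dim=2).values).sum().item()

print(n_constant_channels(elementwise))  # usually 0: masking is i.i.d. per element
print(n_constant_channels(channelwise))  # roughly half the channels fully masked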
In this case, nn.AlphaDropout() will help promote independence between feature maps and should be used instead.

- Parameters
  - p (float, optional) – probability of an element to be masked. Default: 0.5
  - inplace (bool, optional) – if set to True, will do this operation in-place. Default: False
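A small sketch of the inplace flag; the aliasing check below is an assumption consistent with other dropout modules, not something this page documents:

import torch
import torch.nn as nn

x = torch.randn(2, 3, 1, 4, 4)
m = nn.FeatureAlphaDropout(p=0.5, inplace=True)
y = m(x)

# With inplace=True the input buffer itself is overwritten; the returned
# tensor shares the same storage as the input.
assert y.data_ptr() == x.data_ptr()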
- Shape:
Input: (N, C, D, H, W) or (C, D, H, W).
Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input).
Examples:
>>> m = nn.FeatureAlphaDropout(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
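As with other dropout layers, the module is an identity at evaluation time; a quick check, continuing the example above:

>>> m = m.eval()
>>> torch.equal(m(input), input)
True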