MaxPool1d#
- class torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]#
Applies a 1D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size $(N, C, L)$ and output $(N, C, L_{out})$ can be precisely described as:

$$out(N_i, C_j, k) = \max_{m=0, \ldots, \text{kernel\_size}-1} input(N_i, C_j, \text{stride} \times k + m)$$

If padding is non-zero, then the input is implicitly padded with negative infinity on both sides for padding number of points. dilation is the stride between the elements within the sliding window. This link has a nice visualization of the pooling parameters.

Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
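The definition above can be sketched in plain Python over a single channel. This is a hypothetical reference helper (not torch's API), assuming the negative-infinity padding, dilation, and ceil_mode semantics described above:

```python
import math

NEG_INF = float("-inf")

def max_pool1d(x, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False):
    """Reference 1D max pooling over one channel given as a plain list."""
    stride = kernel_size if stride is None else stride
    # Implicit negative-infinity padding on both sides.
    padded = [NEG_INF] * padding + list(x) + [NEG_INF] * padding
    n = len(padded)
    span = dilation * (kernel_size - 1) + 1  # extent of one sliding window
    round_fn = math.ceil if ceil_mode else math.floor
    l_out = round_fn((n - span) / stride) + 1
    # With ceil_mode=True, a window may not start in the right padded region.
    if ceil_mode and (l_out - 1) * stride >= len(x) + padding:
        l_out -= 1
    out = []
    for k in range(l_out):
        start = stride * k
        # Window elements are dilation apart; off-bounds positions
        # (allowed under ceil_mode) are simply dropped.
        window = [padded[start + dilation * m]
                  for m in range(kernel_size)
                  if start + dilation * m < n]
        out.append(max(window))
    return out
```

For example, `max_pool1d([1, 2, 3, 4, 5], 3, stride=2)` takes the max over windows starting at positions 0 and 2, giving `[3, 5]`.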
- Parameters
kernel_size (Union[int, tuple[int]]) – The size of the sliding window, must be > 0.
stride (Union[int, tuple[int]]) – The stride of the sliding window, must be > 0. Default value is kernel_size.
padding (Union[int, tuple[int]]) – Implicit negative infinity padding to be added on both sides, must be >= 0 and <= kernel_size / 2.
dilation (Union[int, tuple[int]]) – The stride between elements within a sliding window, must be > 0.
return_indices (bool) – If True, will return the argmax along with the max values. Useful for torch.nn.MaxUnpool1d later.
ceil_mode (bool) – If True, will use ceil instead of floor to compute the output shape. This ensures that every element in the input tensor is covered by a sliding window.
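The return_indices parameter pairs naturally with torch.nn.MaxUnpool1d. A short sketch, assuming torch is installed, showing how the returned argmax positions let the unpooling layer place each maximum back at its original location:

```python
import torch
import torch.nn as nn

# return_indices=True makes the pool return (values, indices),
# where indices holds the flat position of each maximum.
pool = nn.MaxPool1d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool1d(2, stride=2)

x = torch.tensor([[[1., 8., 2., 7., 3., 6.]]])  # shape (N=1, C=1, L=6)
values, indices = pool(x)
# values:  [[[8., 7., 6.]]]   indices: [[[1, 3, 5]]]

# MaxUnpool1d writes each max back at its recorded index and
# fills every other position with zero.
restored = unpool(values, indices)
# restored: [[[0., 8., 0., 7., 0., 6.]]]
```

Non-maximal entries are not recoverable; unpooling only restores the positions of the maxima, which is why the indices must be kept alongside the values.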
- Shape:
Input: $(N, C, L_{in})$ or $(C, L_{in})$.
Output: $(N, C, L_{out})$ or $(C, L_{out})$, where

$$L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1 \right\rfloor \quad \text{when ceil\_mode=False}$$

$$L_{out} = \left\lceil \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1 \right\rceil \quad \text{when ceil\_mode=True}$$

To ensure that the last pooling window starts inside the input, make $L_{out} = L_{out} - 1$ when $(L_{out} - 1) \times \text{stride} \geq L_{in} + \text{padding}$.
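The shape formula above can be written as a small helper. This is a hypothetical function for illustration (not part of torch), directly transcribing the floor/ceil expression and the last-window correction:

```python
import math

def max_pool1d_out_len(l_in, kernel_size, stride=None, padding=0,
                       dilation=1, ceil_mode=False):
    """Output length L_out of a 1D max pool, per the formula above."""
    stride = kernel_size if stride is None else stride
    round_fn = math.ceil if ceil_mode else math.floor
    l_out = round_fn(
        (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride
    ) + 1
    # Ensure the last pooling window starts inside the input,
    # not in the right padded region (only relevant with ceil_mode=True).
    if ceil_mode and (l_out - 1) * stride >= l_in + padding:
        l_out -= 1
    return l_out
```

For the example below (kernel_size=3, stride=2, L_in=50), this gives floor((50 - 2 - 1) / 2) + 1 = 24, matching the length of the pooled output.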
Examples:
>>> # pool of size=3, stride=2
>>> m = nn.MaxPool1d(3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)