ConvTranspose2d

class torch.ao.nn.quantized.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)

Applies a 2D transposed convolution operator over an input image composed of several input planes. For details on input arguments, parameters, and implementation, see ConvTranspose2d.

For special notes, please see Conv2d.
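The output spatial size of a transposed convolution follows the standard formula documented for ConvTranspose2d. As a quick sketch (a plain-Python helper, not part of the torch API), the size along one dimension can be computed as:

```python
def conv_transpose2d_output_size(in_size, kernel_size, stride=1,
                                 padding=0, output_padding=0, dilation=1):
    # Standard transposed-convolution output-size formula:
    #   out = (in - 1) * stride - 2 * padding
    #         + dilation * (kernel - 1) + output_padding + 1
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# A 6x6 feature map upsampled with kernel 3, stride 2, padding 1, and
# output_padding 1 yields a 12x12 output (matching the example below,
# where output_size=input.size() selects the padding internally).
print(conv_transpose2d_output_size(6, kernel_size=3, stride=2,
                                   padding=1, output_padding=1))  # 12
```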

Variables
  • weight (Tensor) – packed tensor derived from the learnable weight parameter.

  • scale (Tensor) – scalar for the output scale

  • zero_point (Tensor) – scalar for the output zero point

See ConvTranspose2d for other attributes.
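The scale and zero_point attributes define the affine mapping between the quantized output values and their real-valued counterparts. A minimal sketch of that mapping (plain Python, with a hypothetical dequantize helper rather than the torch implementation):

```python
def dequantize(q, scale, zero_point):
    # Affine dequantization used by quantized modules:
    #   real_value = (quantized_value - zero_point) * scale
    return (q - zero_point) * scale

# A quint8 value of 130 with scale=0.5 and zero_point=128
# represents the real value (130 - 128) * 0.5 = 1.0.
print(dequantize(130, 0.5, 128))  # 1.0
```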

Examples:

>>> # QNNPACK or FBGEMM as backend
>>> torch.backends.quantized.engine = 'qnnpack'
>>> # With square kernels and equal stride
>>> import torch.ao.nn.quantized as nnq
>>> m = nnq.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nnq.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50, 100)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can also be specified as an argument
>>> input = torch.randn(1, 16, 12, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])