conv1d
torch.ao.nn.quantized.functional.conv1d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)
Applies a 1D convolution over a quantized 1D input composed of several input planes.
See Conv1d for details and output shape.

Parameters
input – quantized input tensor of shape (minibatch, in_channels, iW)

weight – quantized filters of shape (out_channels, in_channels / groups, kW)

bias – non-quantized bias tensor of shape (out_channels). The tensor type must be torch.float.
stride – the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0

dilation – the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

groups – split input into groups; in_channels should be divisible by the number of groups. Default: 1
padding_mode – the padding mode to use. Only “zeros” is supported for quantized convolution at the moment. Default: “zeros”
scale – quantization scale for the output. Default: 1.0
zero_point – quantization zero_point for the output. Default: 0
dtype – quantization data type to use. Default: torch.quint8
Examples:
>>> from torch.ao.nn.quantized import functional as qF
>>> filters = torch.randn(33, 16, 3, dtype=torch.float)
>>> inputs = torch.randn(20, 16, 50, dtype=torch.float)
>>> bias = torch.randn(33, dtype=torch.float)
>>>
>>> scale, zero_point = 1.0, 0
>>> dtype_inputs = torch.quint8
>>> dtype_filters = torch.qint8
>>>
>>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
>>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
>>> qF.conv1d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)
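A point the doctest above leaves implicit is that the return value is itself a quantized tensor carrying the requested output scale and zero point. The sketch below (a standalone script mirroring the tensors in the example; the seed and the inspection calls are additions, not part of the original) shows how one might inspect that result and recover a float tensor from it:

```python
import torch
from torch.ao.nn.quantized import functional as qF

torch.manual_seed(0)

# Float tensors mirroring the example above.
filters = torch.randn(33, 16, 3, dtype=torch.float)
inputs = torch.randn(20, 16, 50, dtype=torch.float)
bias = torch.randn(33, dtype=torch.float)

scale, zero_point = 1.0, 0
# Activations use quint8 and weights use qint8, as required by quantized conv.
q_filters = torch.quantize_per_tensor(filters, scale, zero_point, torch.qint8)
q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, torch.quint8)

q_out = qF.conv1d(q_inputs, q_filters, bias, padding=1,
                  scale=scale, zero_point=zero_point)

# The output is a quantized tensor with the scale/zero_point passed in above.
print(q_out.shape)         # kernel size 3 with padding=1 preserves width: (20, 33, 50)
print(q_out.is_quantized)  # True
print(q_out.q_scale(), q_out.q_zero_point())

# dequantize() converts back to an ordinary float tensor.
float_out = q_out.dequantize()
```

Note that the bias stays in torch.float: it is folded into the accumulation at higher precision rather than being quantized with the weights.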