quantize
- class torch.ao.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False)
Quantize the input float model with post-training static quantization.
First it prepares the model for calibration, then it calls run_fn, which runs the calibration step; after that, the model is converted to a quantized model.
- Parameters
model – input float model
run_fn – a calibration function that calibrates the prepared model
run_args – positional arguments for run_fn
mapping – correspondence between original module types and quantized counterparts
inplace – carry out model transformations in-place; the original module is mutated
- Returns
Quantized model.
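A minimal eager-mode sketch of a typical call. The module, calibration function, and data below are hypothetical stand-ins; the qconfig here is taken from the active quantized engine, which varies by platform.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    DeQuantStub,
    QuantStub,
    get_default_qconfig,
    quantize,
)

# Hypothetical float model; QuantStub/DeQuantStub mark where tensors
# enter and leave the quantized region in eager-mode quantization.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.fc = nn.Linear(4, 4)
        self.dequant = DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

# quantize() invokes run_fn as run_fn(prepared_model, *run_args), so the
# calibration function receives the prepared model plus our positional args.
def calibrate(model, data_loader):
    model.eval()
    with torch.no_grad():
        for batch in data_loader:
            model(batch)

float_model = SmallNet().eval()
# A qconfig must be attached before calling quantize(); here we pick the
# default for whatever quantized backend is active on this machine.
float_model.qconfig = get_default_qconfig(torch.backends.quantized.engine)

calibration_data = [torch.randn(2, 4) for _ in range(8)]  # stand-in data
qmodel = quantize(float_model, calibrate, [calibration_data])
```

With inplace=False (the default), float_model is left untouched and qmodel is a converted copy whose fc has been replaced by its quantized counterpart.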