prepare_qat#

class torch.ao.quantization.prepare_qat(model, mapping=None, inplace=False)[source]#

Prepares a copy of the model for quantization calibration or quantization-aware training, and converts it to a quantized version.

Quantization configuration should be assigned beforehand to individual submodules via the .qconfig attribute.

Parameters
  • model – input model to be modified in-place

  • mapping – dictionary that maps float modules to the quantized modules that replace them.

  • inplace – carry out model transformations in-place; the original module is mutated.
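A minimal sketch of the usual workflow: attach a qconfig to the float model, then call prepare_qat to swap supported submodules for their fake-quantized QAT counterparts. The model definition and the "fbgemm" backend choice here are illustrative assumptions, not part of this API's contract.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qat_qconfig, prepare_qat

# An example float model (illustrative; any nn.Module works).
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.fc(x))

model = M()
model.train()  # prepare_qat expects a model in training mode

# Assign the quantization configuration before calling prepare_qat.
model.qconfig = get_default_qat_qconfig("fbgemm")

# inplace=False (the default) returns a prepared copy; the Linear
# submodule is replaced by its QAT version with fake-quantized weights.
qat_model = prepare_qat(model)

# The prepared model is trained as usual; observers and fake-quant
# modules collect statistics during the forward pass.
out = qat_model(torch.randn(2, 4))
```

After training, the prepared model is typically passed to torch.ao.quantization.convert to produce the actual quantized model.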