

Overview
TensorRT Python API Reference
The general TensorRT workflow consists of three steps:
1. Populate a tensorrt.INetworkDefinition either with a parser or by using the TensorRT Network API (see tensorrt.INetworkDefinition for more details). The tensorrt.Builder can be used to generate an empty tensorrt.INetworkDefinition.
2. Use the tensorrt.Builder to build a tensorrt.ICudaEngine from the populated tensorrt.INetworkDefinition.
3. Create a tensorrt.IExecutionContext from the tensorrt.ICudaEngine and use it to perform optimized inference.
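The three steps above can be sketched end-to-end in Python. This is a minimal, hedged sketch assuming a recent TensorRT release with the ONNX parser available; the file name "model.onnx" is a hypothetical placeholder, and running it requires an NVIDIA GPU with the tensorrt package installed.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Step 1: populate an INetworkDefinition, here via the ONNX parser
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # hypothetical model file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

# Step 2: build an ICudaEngine from the populated network
config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)
runtime = trt.Runtime(logger)
engine = runtime.deserialize_cuda_engine(serialized_engine)

# Step 3: create an IExecutionContext for optimized inference
context = engine.create_execution_context()
```

Actual inference additionally requires allocating device buffers (e.g. with PyCUDA or cuda-python) and binding them to the context before executing.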
Most other TensorRT classes use a logger to report errors, warnings, and informative messages. TensorRT provides a basic tensorrt.Logger implementation, but you can write your own implementation by deriving from tensorrt.ILogger for more advanced functionality.
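A custom logger can be sketched as below. This assumes the documented tensorrt.ILogger subclassing pattern (the base __init__ must be called, and log(self, severity, msg) implemented); the class name MyLogger and the severity filter are illustrative choices, not part of the API.

```python
import tensorrt as trt

class MyLogger(trt.ILogger):
    """Illustrative logger that only surfaces warnings and errors."""

    def __init__(self):
        # The ILogger base initializer must be invoked explicitly.
        trt.ILogger.__init__(self)

    def log(self, severity, msg):
        # Route TensorRT messages wherever you like; here, stdout.
        if severity <= trt.ILogger.Severity.WARNING:
            print(f"[{severity}] {msg}")

# The custom logger is passed anywhere a logger is expected,
# e.g. when constructing a Builder.
logger = MyLogger()
builder = trt.Builder(logger)
```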
Parsers are used to populate a tensorrt.INetworkDefinition from a model trained in a Deep Learning framework.
The tensorrt.INetworkDefinition represents a computational graph. In order to populate the network, TensorRT provides a suite of parsers for a variety of Deep Learning frameworks. It is also possible to populate the network manually using the Network API.
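Populating a network manually with the Network API can be sketched as follows, using the documented add_input, add_activation, and mark_output calls. This is a hedged sketch assuming a recent TensorRT release; the tensor name "x", its shape, and the single ReLU layer are arbitrary illustrative choices.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Declare a single input tensor (name, dtype, and shape are illustrative).
x = network.add_input("x", trt.float32, (1, 3, 224, 224))

# Add a layer by hand, here a ReLU activation, and mark its
# output as a network output.
relu = network.add_activation(x, trt.ActivationType.RELU)
network.mark_output(relu.get_output(0))
```

Weights for layers such as convolutions would be supplied as trt.Weights when adding those layers; a parser automates exactly this kind of graph construction from a trained model.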
The tensorrt.Builder is used to build a tensorrt.ICudaEngine. In order to do so, it must be provided a populated tensorrt.INetworkDefinition.
The tensorrt.ICudaEngine is the output of the TensorRT optimizer. It is used to generate a tensorrt.IExecutionContext that can perform inference.