
Returns the optimized model used by onnxruntime before computing the inference, as sketched below.
onx – ModelProto
level – optimization level, one of 'ORT_ENABLE_BASIC', 'ORT_ENABLE_EXTENDED', 'ORT_ENABLE_ALL'
output – output file if the proposed cache is not wanted
optimized model
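For illustration, here is a minimal sketch of the onnxruntime mechanism such a helper can rely on; the function name optimize_with_ort is hypothetical, only SessionOptions.graph_optimization_level and SessionOptions.optimized_model_filepath come from onnxruntime.

import onnx
import onnxruntime as rt

def optimize_with_ort(onx: onnx.ModelProto, level: str, output: str) -> onnx.ModelProto:
    # hypothetical helper, not the library's implementation
    so = rt.SessionOptions()
    # map the textual level onto the onnxruntime enumeration
    so.graph_optimization_level = getattr(rt.GraphOptimizationLevel, level)
    # onnxruntime writes the optimized graph to this path when the session is created
    so.optimized_model_filepath = output
    rt.InferenceSession(onx.SerializeToString(), so, providers=["CPUExecutionProvider"])
    return onnx.load(output)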
Default backend based on onnxruntime.InferenceSession. Data is not copied.
input_names – input names
onx – onnx model
Wraps class onnxruntime.InferenceSession to have a signature closer to a python function.
tensor_class – tensor class such as NumpyTensor
input_names – input names
onx – onnx model
f – unused except in error messages
_hold – onnxruntime does not copy the data if it comes from a numpy array on CPU, and it does not hold any reference to it. _hold is used to store the underlying numpy array hosting the data for an OrtTensor when it comes from one. It ensures the garbage collector does not remove it.
Creates a python function calling the onnx backend used by this class, as sketched below.
onx – onnx model
python function
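A minimal sketch of how an onnxruntime.InferenceSession can be wrapped into a python-function-like callable; the helper name make_callable and the single-output convention are assumptions, not the actual implementation.

import numpy as np
import onnxruntime as rt

def make_callable(onx_bytes: bytes):
    # assumed helper: gives the session a python-function-like signature
    sess = rt.InferenceSession(onx_bytes, providers=["CPUExecutionProvider"])
    input_names = [i.name for i in sess.get_inputs()]

    def run(*args: np.ndarray):
        # positional numpy arrays are mapped onto the model inputs
        feeds = dict(zip(input_names, args))
        results = sess.run(None, feeds)
        return results[0] if len(results) == 1 else tuple(results)

    return run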
Returns the dimensions of the tensor. First dimension is the batch dimension if the tensor has more than one dimension. It is always left undefined.
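An illustrative sketch only, assuming a plain shape tuple as input:

def dims_of(shape):
    # illustrative only: the first dimension is treated as the batch
    # dimension and left undefined when there is more than one dimension
    if len(shape) <= 1:
        return tuple(shape)
    return (None,) + tuple(shape[1:])

assert dims_of((8, 3, 224, 224)) == (None, 3, 224, 224)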
Creates an instance of OrtTensor from a numpy array. Relies on ortvalue_from_numpy. A copy of the data in the Numpy object is held by the C_OrtValue only if the device is not cpu. Any expression such as from_array(x.copy()) or from_array(x.astype(np.float32)), … creates an intermediate variable scheduled to be deleted by the garbage collector as soon as the function returns. In that case, the buffer holding the values is deleted and the instance OrtTensor is no longer equal to the original value: assert_allclose(value, tensor.numpy()) is false. value must remain alive as long as the OrtTensor is. A sketch of this constraint follows the parameters below.
value – value
device – CPU, GPU, value such as OrtTensor.CPU, OrtTensor.CUDA0
instance of OrtTensor
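The lifetime constraint described above can be illustrated with onnxruntime.OrtValue.ortvalue_from_numpy, which from_array relies on; the variable names are illustrative.

import numpy as np
from onnxruntime import OrtValue

x = np.arange(6, dtype=np.float32).reshape((2, 3))
# on CPU the OrtValue points at the numpy buffer without copying it,
# so x must stay alive as long as the OrtValue is used
ov = OrtValue.ortvalue_from_numpy(x, device_type="cpu", device_id=0)
np.testing.assert_allclose(x, ov.numpy())

# anti-pattern described above: the temporary returned by astype may be
# garbage collected while the OrtValue still points at its buffer
# ov_bad = OrtValue.ortvalue_from_numpy(x.astype(np.float64))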
Returns the tensor type of this tensor.
Returns the tensor type of this tensor. This property is used to define a key used to cache a jitted function, as sketched below. Same keys mean the same ONNX graph. Different keys usually mean the same ONNX graph but different input shapes.
name – name of the constraint
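A hypothetical sketch of such a cache key, assuming it is derived from the constraint name, the element type and the rank of the tensor:

import numpy as np

def cache_key(name, value):
    # hypothetical key: same dtype and same rank reuse the same jitted
    # function, a different dtype or rank triggers a new compilation
    return (name, value.dtype, len(value.shape))

key = cache_key("X", np.zeros((4, 5), dtype=np.float32))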
Returns the value of this tensor as a numpy array.
Merges two profiles produced by function ort_profile, as sketched below.
prof1 – first profile
prof2 – second profile
suffixes – used by pandas merge
by_column – the second profile is merged by input and output shapes and types plus an additional column, usually None, 'idx' or 'op_name'
merged profiles
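A rough sketch of such a merge, assuming pandas DataFrames and profiling columns named args_op_name, args_input_type_shape and args_output_type_shape (assumed names):

import pandas as pd

def merge_profiles(prof1, prof2, suffixes=("_1", "_2")):
    # assumed column names coming from the flattened onnxruntime trace
    keys = ["args_op_name", "args_input_type_shape", "args_output_type_shape"]
    return pd.merge(prof1, prof2, on=keys, suffixes=suffixes, how="outer")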
Profiles the execution of an onnx graph with onnxruntime (see the sketch after the parameters below).
filename_or_bytes – filename or bytes
feeds – inputs, dictionary of numpy arrays
sess_options – instance of onnxruntime.SessionOptions
disable_optimization – disable onnxruntime optimization
repeat – number of times to run the inference
as_df – returns the results as a DataFrame if True, as a dictionary otherwise
providers – list of providers to use when initializing the inference session; if None, the default value is ["CPUExecutionProvider"]
first_it_out – if aggregated, leaves the first iteration out
agg – aggregate by event
agg_op_name – aggregate on operator name or operator index
kwargs – additional parameters when initializing the inference session
DataFrame or dictionary
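A minimal sketch of the underlying profiling mechanism; the helper name profile and the column flattening are assumptions, only enable_profiling, end_profiling and the JSON trace come from onnxruntime.

import json
import pandas as pd
import onnxruntime as rt

def profile(onx_bytes, feeds, repeat=10):
    so = rt.SessionOptions()
    so.enable_profiling = True
    sess = rt.InferenceSession(onx_bytes, so, providers=["CPUExecutionProvider"])
    for _ in range(repeat):
        sess.run(None, feeds)
    prof_file = sess.end_profiling()  # path of the JSON trace written by onnxruntime
    with open(prof_file, "r") as f:
        events = json.load(f)
    # flatten the events into one row per profiling event
    return pd.json_normalize(events, sep="_")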