onnx-array-api 0.2.0 documentation

ort#

ort_optimized_model#

onnx_array_api.ort.ort_optimizers.ort_optimized_model(onx: str | ModelProto, level: str = 'ORT_ENABLE_ALL', output: str | None = None) → str | ModelProto [source]#

Returns the optimized model used by onnxruntime before running the inference.

Parameters:
  • onx – ModelProto

  • level – optimization level, ‘ORT_ENABLE_BASIC’, ‘ORT_ENABLE_EXTENDED’, ‘ORT_ENABLE_ALL’

  • output – output file if the proposed cache is not wanted

Returns:

optimized model
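
A minimal sketch of how the function might be called; the model path model.onnx is a placeholder, and the returned object is assumed to be a ModelProto when no output file is requested:

    import onnx
    from onnx_array_api.ort.ort_optimizers import ort_optimized_model

    # Placeholder model; any ONNX file accepted by onnxruntime works here.
    onx = onnx.load("model.onnx")

    # Retrieve the graph onnxruntime would actually run after optimization.
    optimized = ort_optimized_model(onx, level="ORT_ENABLE_BASIC")
    print(len(onx.graph.node), "nodes before,", len(optimized.graph.node), "after")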

EagerOrtTensor#

class onnx_array_api.ort.ort_tensors.EagerOrtTensor(tensor: OrtValue | OrtTensor | ndarray, _hold: ndarray | None = None) [source]#

Defines a value for onnxruntime as a backend.
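
The signature suggests the constructor accepts a plain numpy array; a small sketch under that assumption, reusing the OrtTensor API documented below:

    import numpy as np
    from onnx_array_api.ort.ort_tensors import EagerOrtTensor

    x = np.arange(6, dtype=np.float32).reshape((2, 3))
    t = EagerOrtTensor(x)      # wraps the array for the onnxruntime backend
    print(t.shape)             # (2, 3)
    print(t.numpy())           # back to a numpy array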

JitOrtTensor#

class onnx_array_api.ort.ort_tensors.JitOrtTensor(tensor: OrtValue | OrtTensor | ndarray, _hold: ndarray | None = None) [source]#

Defines a value for onnxruntime as a backend.

OrtTensor#

class onnx_array_api.ort.ort_tensors.OrtTensor(tensor: OrtValue | OrtTensor | ndarray, _hold: ndarray | None = None) [source]#

Default backend based on onnxruntime.InferenceSession. Data is not copied.

Parameters:
  • input_names – input names

  • onx – onnx model

class Evaluator(tensor_class: type, input_names: List[str], onx: ModelProto, f: Callable | None = None) [source]#

Wraps class onnxruntime.InferenceSession to have a signature closer to a python function.

Parameters:
  • tensor_class – tensor class such as NumpyTensor

  • input_names – input names

  • onx – onnx model

  • f – unused except in error messages

  • _hold – onnxruntime does not copy the data if it comes from a numpy array on CPU, and it does not hold any reference on it. _hold is used to store the underlying numpy array hosting the data for an OrtTensor if it comes from one. It ensures the garbage collector does not remove it.

run(*inputs: List[OrtTensor]) → List[OrtTensor] [source]#

Executes the function.

Parameters:

inputs – function inputs

Returns:

outputs

classmethod create_function(input_names: List[str], onx: ModelProto, f: Callable) → Callable [source]#

Creates a python function calling the onnx backend used by this class.

Parameters:

onx – onnx model

Returns:

python function

property dims#

Returns the dimensions of the tensor. The first dimension is the batch dimension if the tensor has more than one dimension. It is always left undefined.

property dtype: DType#

Returns the element type of this tensor.

static from_array(value: ndarray, device: OrtDevice | None = None) → OrtTensor [source]#

Creates an instance of OrtTensor from a numpy array. Relies on ortvalue_from_numpy. A copy of the data in the Numpy object is held by the C_OrtValue only if the device is not cpu. Any expression such as from_array(x.copy()) or from_array(x.astype(np.float32)), … creates an intermediate variable scheduled to be deleted by the garbage collector as soon as the function returns. In that case, the buffer holding the values is deleted and the instance OrtTensor is no longer equal to the original value: assert_allclose(value, tensor.numpy()) is false. value must remain alive as long as the OrtTensor is.

Parameters:
  • value – value

  • device – CPU, GPU, value such as OrtTensor.CPU, OrtTensor.CUDA0

Returns:

instance of OrtTensor
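
A short sketch of the lifetime caveat described above: the source array must stay referenced for as long as the OrtTensor is used.

    import numpy as np
    from numpy.testing import assert_allclose
    from onnx_array_api.ort.ort_tensors import OrtTensor

    value = np.random.rand(3, 4).astype(np.float32)

    # Safe: `value` remains alive while the tensor is used.
    tensor = OrtTensor.from_array(value)
    assert_allclose(value, tensor.numpy())

    # Risky: the temporary produced by astype() can be garbage collected right
    # after the call, leaving the tensor's CPU buffer dangling.
    # tensor = OrtTensor.from_array(value.astype(np.float64))  # avoid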

property key: Any#

Unique key for a tensor of the same type.

numpy() → ndarray [source]#

Converts the OrtValue into a numpy array.

property shape: Tuple[int, ...]#

Returns the shape of the tensor.

property tensor_type: TensorType#

Returns the tensor type of this tensor.

tensor_type_dims(name: str) → TensorType [source]#

Returns the tensor type of this tensor. This property is used to define a key used to cache a jitted function. Same keys mean the same ONNX graph. Different keys usually mean the same ONNX graph but different input shapes.

Parameters:

name – name of the constraint

property value: OrtValue#

Returns the value of this tensor as an OrtValue.

merge_ort_profile#

onnx_array_api.ort.ort_profile.merge_ort_profile(prof1: DataFrame, prof2: DataFrame, suffixes: Tuple[str, str] = ('base', 'opti'), by_column: str | None = None) → Tuple[DataFrame, DataFrame] [source]#

Merges two profiles produced by function ort_profile.

Parameters:
  • prof1 – first profile

  • prof2 – second profile

  • suffixes – used by pandas merge

  • by_column – the second profile is merged by input, output shapes and types plus an additional column, usually None, ‘idx’ or ‘op_name’

Returns:

merged profiles
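
A hedged sketch combining merge_ort_profile with ort_profile (documented next) to compare the same model with and without onnxruntime optimization; the model path and input name are placeholders.

    import numpy as np
    from onnx_array_api.ort.ort_profile import merge_ort_profile, ort_profile

    # Hypothetical input name "X"; adapt to the model's actual inputs.
    feeds = {"X": np.random.rand(8, 3).astype(np.float32)}

    prof_base = ort_profile("model.onnx", feeds, disable_optimization=True)
    prof_opti = ort_profile("model.onnx", feeds, disable_optimization=False)

    merged, gr = merge_ort_profile(prof_base, prof_opti, suffixes=("base", "opti"))
    print(merged.head())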

ort_profile#

onnx_array_api.ort.ort_profile.ort_profile(filename_or_bytes: str | bytes | ModelProto, feeds: Dict[str, ndarray], sess_options: Any | None = None, disable_optimization: bool = False, repeat: int = 10, as_df: bool = True, providers: List[str] | None = None, first_it_out: bool = False, agg: bool = False, agg_op_name: bool = False, **kwargs) → List | DataFrame [source]#

Profiles the execution of an onnx graph with onnxruntime.

Parameters:
  • filename_or_bytes – filename or bytes

  • feeds – inputs, dictionary of numpy arrays

  • sess_options – instance of onnxruntime.SessionOptions

  • disable_optimization – disable onnxruntime optimization

  • repeat – number of times to run the inference

  • as_df – if True, returns the results as a DataFrame

  • providers – list of providers to use when initializing the inference session; if None, the default value is [“CPUExecutionProvider”]

  • first_it_out – if aggregated, leaves the first iteration out

  • agg – aggregate by event

  • agg_op_name – aggregate on operator name or operator index

  • kwargs – additional parameters when initializing the inference session

Returns:

DataFrame or dictionary
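
A minimal sketch of a single profiling run aggregated by operator name; the model path and input name are placeholders.

    import numpy as np
    from onnx_array_api.ort.ort_profile import ort_profile

    feeds = {"X": np.random.rand(16, 10).astype(np.float32)}  # hypothetical input
    df = ort_profile(
        "model.onnx",                      # placeholder path
        feeds,
        repeat=20,
        as_df=True,
        providers=["CPUExecutionProvider"],
        first_it_out=True,                 # leave the warm-up iteration out
        agg=True,                          # aggregate by event
        agg_op_name=True,                  # group by operator name
    )
    print(df)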
