onnx-array-api 0.3.1 documentation


First examples with onnx-array-api

This demonstrates an easy case with onnx-array-api. It shows how a function can be easily converted into ONNX.

A loss function from numpy to ONNX

The first example takes a loss function and converts it into ONNX.

import numpy as np
from onnx_array_api.npx import absolute, jit_onnx
from onnx_array_api.plotting.text_plot import onnx_simple_text_plot

The function looks like a numpy function.

def l1_loss(x, y):
    return absolute(x - y).sum()

The function needs to be converted into ONNX with the function jit_onnx. jitted_l1_loss is a wrapper: it intercepts all calls to l1_loss. When a call happens, it checks the input types and creates the corresponding ONNX graph.

jitted_l1_loss=jit_onnx(l1_loss)

First execution and conversion to ONNX. The wrapper caches the created ONNX graph. It reuses it if the input types and the number of dimensions are the same. Otherwise, it creates a new one and keeps the old one.

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
res = jitted_l1_loss(x, y)
print(res)
0.09999999
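As a sanity check (not part of the original example), the same value can be computed with plain numpy; the jitted function and this reference should agree up to float32 rounding:

```python
import numpy as np

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)

# Reference L1 loss computed directly with numpy,
# mirroring Sub -> Abs -> ReduceSum in the ONNX graph.
ref = np.abs(x - y).sum()
print(ref)  # close to 0.1, printed as 0.09999999 in float32
```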

The ONNX graph can be accessed in the following way.

print(onnx_simple_text_plot(jitted_l1_loss.get_onnx()))
opset: domain='' version=18
input: name='x0' type=dtype('float32') shape=['', '']
input: name='x1' type=dtype('float32') shape=['', '']
Sub(x0, x1) -> r__0
  Abs(r__0) -> r__1
    ReduceSum(r__1, keepdims=0) -> r__2
output: name='r__2' type=dtype('float32') shape=None
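The caching behaviour described above can be sketched with a toy wrapper (a hypothetical illustration, not the actual jit_onnx implementation): graphs are cached by the input dtypes and numbers of dimensions, so a call with new dtypes triggers a new conversion.

```python
import numpy as np


class ToyJit:
    """Toy sketch of a jit cache keyed on (dtype, ndim) of the inputs."""

    def __init__(self, f):
        self.f = f
        self.cache = {}  # key -> "compiled" artifact

    def __call__(self, *args):
        key = tuple((a.dtype.str, a.ndim) for a in args)
        if key not in self.cache:
            # The real wrapper would build an ONNX graph here.
            self.cache[key] = f"graph for {key}"
        return self.f(*args)


jit = ToyJit(lambda x, y: np.abs(x - y).sum())
x32 = np.array([1.0, 2.0], dtype=np.float32)
jit(x32, x32)
jit(x32, x32)  # same dtypes and ndim -> reuses the cached graph
x64 = x32.astype(np.float64)
jit(x64, x64)  # new dtype -> a second graph is created and cached
print(len(jit.cache))  # 2
```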

We can also define a more complex loss by computing the L1 loss on the first column and the L2 loss on the second one.

def l1_loss(x, y):
    return absolute(x - y).sum()


def l2_loss(x, y):
    return ((x - y) ** 2).sum()


def myloss(x, y):
    return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])


jitted_myloss = jit_onnx(myloss)

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
res = jitted_myloss(x, y)
print(res)
print(onnx_simple_text_plot(jitted_myloss.get_onnx()))
0.042
opset: domain='' version=18
input: name='x0' type=dtype('float32') shape=['', '']
input: name='x1' type=dtype('float32') shape=['', '']
Constant(value=[1]) -> cst__0
Constant(value=[2]) -> cst__1
Constant(value=[1]) -> cst__2
  Slice(x0, cst__0, cst__1, cst__2) -> r__12
Constant(value=[1]) -> cst__3
Constant(value=[2]) -> cst__4
Constant(value=[1]) -> cst__5
  Slice(x1, cst__3, cst__4, cst__5) -> r__14
Constant(value=[0]) -> cst__6
Constant(value=[1]) -> cst__7
Constant(value=[1]) -> cst__8
  Slice(x0, cst__6, cst__7, cst__8) -> r__16
Constant(value=[0]) -> cst__9
Constant(value=[1]) -> cst__10
Constant(value=[1]) -> cst__11
  Slice(x1, cst__9, cst__10, cst__11) -> r__18
Constant(value=[1]) -> cst__13
  Squeeze(r__12, cst__13) -> r__20
Constant(value=[1]) -> cst__15
  Squeeze(r__14, cst__15) -> r__21
    Sub(r__20, r__21) -> r__24
Constant(value=[1]) -> cst__17
  Squeeze(r__16, cst__17) -> r__22
Constant(value=[1]) -> cst__19
  Squeeze(r__18, cst__19) -> r__23
    Sub(r__22, r__23) -> r__25
      Abs(r__25) -> r__28
        ReduceSum(r__28, keepdims=0) -> r__30
Constant(value=2) -> r__26
  CastLike(r__26, r__24) -> r__27
    Pow(r__24, r__27) -> r__29
      ReduceSum(r__29, keepdims=0) -> r__31
        Add(r__30, r__31) -> r__32
output: name='r__32' type=dtype('float32') shape=None
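The combined result can again be cross-checked with plain numpy (a verification sketch, not part of the original example): the L1 term contributes 0.04, the L2 term 0.002, for a total of 0.042.

```python
import numpy as np

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)

# L1 loss on the first column, L2 loss on the second column.
l1 = np.abs(x[:, 0] - y[:, 0]).sum()
l2 = ((x[:, 1] - y[:, 1]) ** 2).sum()
print(l1 + l2)  # close to 0.042
```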

Eager mode

import numpy as np
from onnx_array_api.npx import absolute, eager_onnx


def l1_loss(x, y):
    """
    err is a type inheriting from
    :class:`EagerTensor <onnx_array_api.npx.npx_tensors.EagerTensor>`.
    It needs to be converted to numpy first before any display.
    """
    err = absolute(x - y).sum()
    print(f"l1_loss={err.numpy()}")
    return err


def l2_loss(x, y):
    err = ((x - y) ** 2).sum()
    print(f"l2_loss={err.numpy()}")
    return err


def myloss(x, y):
    return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])

Eager mode is enabled by the function eager_onnx. It intercepts all calls to myloss. On the first call, it replaces each numpy array by a tensor corresponding to the selected runtime, here numpy as well through EagerNumpyTensor.

eager_myloss = eager_onnx(myloss)

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)

First execution and conversion to ONNX. The wrapper caches many small ONNX graphs, each corresponding to a simple operator (+, -, /, *, …), a reduce function, or any other function from the API. It reuses them if the input types and the number of dimensions are the same. Otherwise, it creates new ones and keeps the old ones.

res = eager_myloss(x, y)
print(res)
l1_loss=0.03999999910593033
l2_loss=0.001999999163672328
0.042

There is no single ONNX graph to show. Every operation is converted into small ONNX graphs.

Total running time of the script: (0 minutes 0.041 seconds)

Gallery generated by Sphinx-Gallery
