
File library.h


This header provides an API for extending PyTorch’s core library of operators with user-defined operators and data types.

Definition (torch/library.h)

Detailed Description

This API can be used in a few ways:

  • You can define new custom operators and classes with TORCH_LIBRARY, making them available for use in both eager Python as well as in TorchScript. This API is modeled off of pybind11’s PYBIND11_MODULE macro, as the provided functionality is similar (pybind11 lets you bind C++ to Python only; torch/library.h lets you bind C++ simultaneously to Python and TorchScript).

  • You can override existing operators with TORCH_LIBRARY_IMPL, providing a new implementation for these operators for a custom backend (e.g., XLA). When you call these operators with tensors of your custom backend, your overridden implementations will be called instead of the standard implementations.

  • You can use both capabilities at the same time, allowing you to write custom operators that register CPU/CUDA/Autograd implementations without having to write the boilerplate conditionals yourself (a sketch of this pattern follows the example below).

For a tutorial-style introduction to the library API, check out the Extending TorchScript with Custom C++ Operators tutorial.

// Define a library whose operators live in the namespace 'myops'.
// You must define all of the operators for this library in
// this namespace.
TORCH_LIBRARY(myops, m) {
  // Define an operator with exactly one implementation for all backends.
  m.def("add(Tensor self, Tensor other) -> Tensor", &add_impl);

  // Define a schema for an operator, but provide no implementation
  // (use this syntax if you want to use the dispatcher)
  m.def("mul(Tensor self, Tensor other) -> Tensor");

  // Provide an implementation for a defined operator (you can
  // provide multiple; one per backend). The dispatcher takes care of
  // calling the correct implementation depending on if we get a CPU
  // tensor or a CUDA tensor
  m.impl("mul", torch::kCPU, &mul_cpu_impl);
  m.impl("mul", torch::kCUDA, &mul_cuda_impl);
}

// Define implementations for operators for a non-standard backend,
// e.g., XLA (valid values are entries of DispatchKey). This can
// be used to define operators in a different file than the initial
// TORCH_LIBRARY definition (e.g., if it is in an external library)
TORCH_LIBRARY_IMPL(myops, XLA, m) {
  m.impl("mul", &mul_xla_impl);
}
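The third bullet above describes combining both macros so that the dispatcher, rather than hand-written conditionals, picks the right kernel for each call. A minimal sketch of that pattern follows, assuming a hypothetical namespace myadd_ops and hypothetical kernels myadd_cpu, myadd_cuda, and myadd_autograd defined elsewhere in your extension:

#include <ATen/ATen.h>
#include <torch/library.h>

// Hypothetical kernels; their definitions live elsewhere in the extension.
at::Tensor myadd_cpu(const at::Tensor& self, const at::Tensor& other);
at::Tensor myadd_cuda(const at::Tensor& self, const at::Tensor& other);
at::Tensor myadd_autograd(const at::Tensor& self, const at::Tensor& other);

// Declare the schema only; the dispatcher routes each call to the
// implementation registered for the inputs' dispatch key.
TORCH_LIBRARY(myadd_ops, m) {
  m.def("myadd(Tensor self, Tensor other) -> Tensor");
}

// One TORCH_LIBRARY_IMPL block per dispatch key; these blocks may live
// in separate translation units (e.g., the CUDA kernel in a .cu file).
TORCH_LIBRARY_IMPL(myadd_ops, CPU, m) {
  m.impl("myadd", &myadd_cpu);
}

TORCH_LIBRARY_IMPL(myadd_ops, CUDA, m) {
  m.impl("myadd", &myadd_cuda);
}

// The autograd formula (typically a thin wrapper around a
// torch::autograd::Function subclass) is registered at the Autograd key.
TORCH_LIBRARY_IMPL(myadd_ops, Autograd, m) {
  m.impl("myadd", &myadd_autograd);
}

With this layout no backend checks appear in your own code: the dispatcher selects the CPU or CUDA kernel based on the input tensors and inserts the autograd wrapper when gradients are required.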

Includes

  • ATen/core/dispatch/Dispatcher.h

  • ATen/core/enum_tag.h

  • ATen/core/op_registration/infer_schema.h

  • ATen/core/op_registration/op_allowlist.h

  • ATen/core/op_registration/op_registration.h

  • c10/core/DispatchKey.h

  • torch/csrc/jit/frontend/function_schema_parser.h

  • torch/custom_class.h
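The torch/custom_class.h include backs the “classes” half of the first bullet in the description: inside a TORCH_LIBRARY block, torch::Library::class_ registers a C++ class (derived from torch::CustomClassHolder) so it can be used from Python and TorchScript. A minimal sketch, using a hypothetical MyStack class and namespace my_classes:

#include <torch/custom_class.h>
#include <torch/library.h>

#include <string>
#include <vector>

// Hypothetical class; custom classes must derive from torch::CustomClassHolder.
struct MyStack : torch::CustomClassHolder {
  std::vector<std::string> stack_;
  void push(std::string x) { stack_.push_back(std::move(x)); }
  std::string pop() {
    auto val = stack_.back();
    stack_.pop_back();
    return val;
  }
};

TORCH_LIBRARY(my_classes, m) {
  // Exposes the class as torch.classes.my_classes.MyStack in Python.
  m.class_<MyStack>("MyStack")
      .def(torch::init<>())
      .def("push", &MyStack::push)
      .def("pop", &MyStack::pop);
}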

Included By

Namespaces

Classes

Enums

Functions

Defines