# ivy-llc/ivy

Convert Machine Learning Code Between Frameworks
Website | Docs | Demos | Design | FAQ
Ivy enables you to:
- Convert ML models, tools and libraries between frameworks while maintaining complete functionality using `ivy.transpile`
- Create optimized graph-based models and functions in any native framework (PyTorch, TensorFlow, etc.) with `ivy.trace_graph`
The easiest way to set up Ivy is to install it using `pip`:

```bash
pip install ivy
```
### Docker Image

You can pull the Docker image for Ivy from:

```bash
docker pull ivyllc/ivy:latest
```
### From Source

You can also install Ivy from source if you want to take advantage of the latest changes, but we can't ensure everything will work as expected 😅

```bash
git clone https://github.com/ivy-llc/ivy.git
cd ivy
pip install --user -e .
```
If you want to set up testing and various frameworks, it's probably best to check out the Setting Up page, where OS-specific and IDE-specific instructions are available!
These are the frameworks that `ivy.transpile` currently supports conversions from and to. We're working hard on adding support for more frameworks; let us know on Discord if there are source/target frameworks that would be useful for you!
| Framework  | Source | Target |
|------------|--------|--------|
| PyTorch    | ✅     | 🚧     |
| TensorFlow | 🚧     | ✅     |
| JAX        | 🚧     | ✅     |
| NumPy      | 🚧     | ✅     |
Ivy's transpiler allows you to convert code between different ML frameworks. Have a look at our Quickstart notebook to get a brief idea of the features!
Beyond that, based on the frameworks you want to convert code between, there are a few more examples further down this page 👇 which contain a number of models and libraries transpiled between PyTorch, JAX, TensorFlow and NumPy.
Here are some examples to help you get started using Ivy! The examples page also features a wide range of demos and tutorials showcasing some more use cases for Ivy.
### Transpiling any code from one framework to another
```python
import ivy
import torch
import tensorflow as tf

def torch_fn(x):
    a = torch.mul(x, x)
    b = torch.mean(x)
    return x * a + b

tf_fn = ivy.transpile(torch_fn, source="torch", target="tensorflow")

tf_x = tf.convert_to_tensor([1., 2., 3.])
ret = tf_fn(tf_x)
```
### Tracing a computational graph of any code
```python
import ivy
import torch

def torch_fn(x):
    a = torch.mul(x, x)
    b = torch.mean(x)
    return x * a + b

torch_x = torch.tensor([1., 2., 3.])
graph = ivy.trace_graph(torch_fn, to="torch", args=(torch_x,))
ret = graph(torch_x)
```
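To build some intuition for what tracing a computational graph means conceptually, here is a minimal, framework-free sketch: run the function once on a proxy value that records every operation, then replay the recorded operations on new inputs. The `Node`, `trace`, and `replay` names are illustrative only, and this is not Ivy's actual implementation.

```python
# Minimal sketch of the idea behind graph tracing (NOT Ivy's implementation):
# run the function once with a proxy value that records every operation,
# then replay the recorded "graph" on new inputs without re-tracing.

class Node:
    """Proxy value that records the operations applied to it."""
    def __init__(self, tape, value):
        self.tape = tape
        self.value = value

    def _record(self, op, other):
        other_value = other.value if isinstance(other, Node) else other
        result = Node(self.tape, op(self.value, other_value))
        self.tape.append((op, self, other, result))
        return result

    def __mul__(self, other):
        return self._record(lambda a, b: a * b, other)

    def __add__(self, other):
        return self._record(lambda a, b: a + b, other)

def trace(fn, example_input):
    """Trace fn once, returning a function that replays the recorded ops."""
    tape = []
    inp = Node(tape, example_input)
    out = fn(inp)

    def replay(x):
        values = {id(inp): x}
        for op, a, b, result in tape:
            a_val = values[id(a)]
            b_val = values[id(b)] if isinstance(b, Node) else b
            values[id(result)] = op(a_val, b_val)
        return values[id(out)]

    return replay

def fn(x):
    return x * x + 3.0

graph = trace(fn, 2.0)  # traced once with an example input
print(graph(4.0))       # replays the recorded ops -> 19.0
```

A real tracer handles far more operator types and tensor shapes, but the core trick is the same: the traced graph skips all the Python-level wrapping and only replays the recorded computation.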
Let's take a look at how Ivy works as a transpiler in more detail to get an idea of why and where to use it.
## When is Ivy's transpiler useful?
If you want to use building blocks published in other frameworks (neural networks, layers, array computing libraries, training pipelines...), integrate code developed in various frameworks, or simply migrate code from one framework to another (or even between versions of the same framework), the transpiler is definitely the tool for the job! You can use the converted code just as if it were code originally developed in that framework, applying framework-specific optimizations or tools, instantly exposing your project to all of the unique perks of a different framework.
Ivy's transpiler allows you to use code from any other framework (or from any other version of the same framework!) in your own code, by just adding one line of code.
This way, Ivy makes all ML-related projects available for you, independently of the framework you want to use to research, develop, or deploy systems. Feel free to head over to the docs for the full API reference, but the functions you'd most likely want to use are:
```python
# Converts framework-specific code to a target framework of choice. See usage in the documentation
ivy.transpile()

# Traces an efficient fully-functional graph from a function, removing all wrapping and redundant code. See usage in the documentation
ivy.trace_graph()
```
```python
import ivy
import torch
import tensorflow as tf

def torch_fn(x):
    x = torch.abs(x)
    return torch.sum(x)

x1 = tf.convert_to_tensor([1., 2.])

# Function is provided with source and target -> transpilation happens eagerly
tf_fn = ivy.transpile(torch_fn, source="torch", target="tensorflow")

# tf_fn is now tensorflow code and runs efficiently
ret = tf_fn(x1)
```
```python
import ivy
import kornia
import tensorflow as tf

x2 = tf.random.normal((5, 3, 4, 4))

# Module is provided -> transpilation happens lazily
tf_kornia = ivy.transpile(kornia, source="torch", target="tensorflow")

# The transpilation is initialized here, and this function is converted to tensorflow
ret = tf_kornia.color.rgb_to_grayscale(x2)

# Transpilation has already occurred, the tensorflow function runs efficiently
ret = tf_kornia.color.rgb_to_grayscale(x2)
```
If you pass the necessary arguments for function tracing, the graph tracing step will happen instantly (eagerly). Otherwise, the graph tracing will happen only when the returned function is first invoked.
```python
import ivy
import jax

ivy.set_backend("jax")

# Simple JAX function to transpile
def test_fn(x):
    return jax.numpy.sum(x)

x1 = ivy.array([1., 2.])
```
```python
# Arguments are available -> tracing happens eagerly
eager_graph = ivy.trace_graph(test_fn, to="jax", args=(x1,))

# eager_graph now runs efficiently
ret = eager_graph(x1)
```
```python
# Arguments are not available -> tracing happens lazily
lazy_graph = ivy.trace_graph(test_fn, to="jax")

# The traced graph is initialized, tracing will happen here
ret = lazy_graph(x1)

# Tracing has already happened, traced graph runs efficiently
ret = lazy_graph(x1)
```
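The eager/lazy distinction above can be illustrated with a small, framework-free Python sketch: with lazy tracing, the expensive setup runs on the first call and is cached for every call after that. The `trace_lazily` helper and `trace_count` counter below are hypothetical names for illustration, not Ivy's actual code.

```python
# Simplified illustration of lazy tracing (NOT Ivy's code): the expensive
# tracing step is deferred until the first call, then cached and reused.

trace_count = {"n": 0}

def trace_lazily(fn):
    state = {}

    def traced(*args):
        if "graph" not in state:
            # First call: perform the (expensive) tracing work exactly once
            trace_count["n"] += 1
            state["graph"] = fn  # stand-in for a real traced graph
        return state["graph"](*args)

    return traced

def square_sum(xs):
    return sum(x * x for x in xs)

lazy_graph = trace_lazily(square_sum)  # nothing traced yet
print(lazy_graph([1.0, 2.0]))          # tracing happens here -> 5.0
print(lazy_graph([1.0, 2.0]))          # cached graph reused  -> 5.0
print(trace_count["n"])                # traced exactly once  -> 1
```

Deferring the work like this is what lets `ivy.transpile` accept a whole module eagerly while only paying the conversion cost for the functions you actually call.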
If you want to learn more, you can find more information in the Ivy as a transpiler section of the docs!
You can find Ivy's documentation on the Docs page, which includes:
- Motivation: This contextualizes the problem Ivy is trying to solve by going over:
  - The current ML Explosion.
  - Why it is important to solve this problem.
- Related Work: Which paints a picture of the role Ivy plays in the ML stack, comparing it to other existing solutions in terms of functionalities and abstraction level.
- Design: A user-focused guide about the design decisions behind the architecture and the main building blocks of Ivy.
- Deep Dive: Which delves deeper into the implementation details of Ivy and is oriented towards potential contributors to the code base.
We believe that everyone can contribute and make a difference. Whether it's writing code, fixing bugs, or simply sharing feedback, your contributions are definitely welcome and appreciated 🙌
Check out all of our Open Tasks, and find out more info in our Contributing guide in the docs! Or to immediately dive into a useful task, look for any failing tests on our Test Dashboard!
Join our growing community on a mission to make conversions between frameworks simple and accessible to all! Whether you are a seasoned developer or just starting out, you'll find a place here! Join the Ivy community on our Discord 👾 server, which is the perfect place to ask questions, share ideas, and get help from both fellow developers and the Ivy Team directly.
See you there!
If you use Ivy for your work, please don't forget to give proper credit by including the accompanying paper 📄 in your references. It's a small way to show appreciation and help to continue to support this and other open source projects 🙌
```bibtex
@article{lenton2021ivy,
  title={Ivy: Templated deep learning for inter-framework portability},
  author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
  journal={arXiv preprint arXiv:2102.02886},
  year={2021}
}
```