
apache/tvm

Open deep learning compiler stack for CPU, GPU, and specialized accelerators


Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes

Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between the productivity-focused deep learning frameworks and the performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation for different backends.

License

TVM is licensed under the Apache-2.0 license.

Getting Started

Check out the TVM Documentation site for installation instructions, tutorials, examples, and more. The Getting Started with TVM tutorial is a great place to start.

Contribute to TVM

TVM adopts the Apache committer model. We aim to create an open-source project maintained and owned by the community. Check out the Contributor Guide.

History and Acknowledgement

TVM started as a research project for deep learning compilation. The first version of the project benefited a lot from the following projects:

  • Halide: Part of TVM's TIR and arithmetic simplification module originates from Halide. We also learned and adapted some parts of the lowering pipeline from Halide.
  • Loopy: use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration of symbolic scan operator for recurrence.

Since then, the project has gone through several rounds of redesigns. The current design is also drastically different from the initial design, following the development trend of the ML compiler community.

The most recent version focuses on a cross-level design, with TensorIR as the tensor-level representation, Relax as the graph-level representation, and Python-first transformations. The project's current design goal is to make the ML compiler accessible by enabling most transformations to be customizable in Python, and by bringing a cross-level representation that can jointly optimize computational graphs, tensor programs, and libraries. The project is also a foundational infrastructure for building Python-first vertical compilers for domains such as LLMs.

