PyTorch ONNX exporter

Justin Chu edited this page Jan 28, 2025 · 78 revisions

Page Maintainers: @justinchuby

Documentation for developing the PyTorch-ONNX exporter (torch.onnx). For an index of all ONNX exporter related topics, see PyTorch ONNX Topics.

Table of Contents

Development process

Environment setup

We highly recommend using Linux. Other platforms are not tested in PyTorch CI and are generally not used by the torch.onnx developers.

Fork PyTorch

Fork github.com/pytorch/pytorch and clone your fork to your workstation.

Run:

```shell
git submodule update --init --recursive --jobs 0
```

Build PyTorch

CUDA is not required for most development tasks. If you use CUDA, building PyTorch will probably be slower.

Install Anaconda and activate a new environment.

Install direnv and initialize your .envrc file in the root of your PyTorch git repo:
NOTE: Please remember to hook direnv into your shell after you install it.

```shell
# Make the local package name built by `setup.py develop` the same
# as the one that's on conda.
echo "export TORCH_PACKAGE_NAME=pytorch" >> .envrc
# Let CMake find binaries and libs installed by conda.
echo 'export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}' >> .envrc
direnv allow
```

On Azure Linux:

```shell
sudo dnf install glibc-devel kernel-headers
```

Then see the instructions in PyTorch's README.

Optional build tips

PyTorch C++ development tips.

Use direnv for Anaconda environment selection.

Set more environment variables in your .envrc file:

```shell
# Only if you're building without CUDA.
export USE_CUDA=0
# Only if you're building with ccache.
PATH_add /usr/lib/ccache
# Needed for older compilers or conda compilers.
export LDFLAGS='-lrt'
# Build with debug symbols.
export DEBUG=1
```

Install additional dependencies

Install the dependencies required for development and to run CI checks locally.

```shell
pip install expecttest pytest parameterized flake8 hypothesis pytest-cov pytest-xdist pytest-subtest pylint lintrunner ghstack beartype
lintrunner init
```

Read more about:

  1. lintrunner (required): runs all the linters and ensures consistency between the CI and local development environments.
  2. ghstack (optional): conveniently submit stacks of diffs to GitHub as separate pull requests. NOTE: GitLens's interactive rebase feature comes in handy with ghstack.
  3. To recover your branch from ghstack: ghstack checkout github_link_to_pr

ONNX and ONNX Runtime

```shell
pip install onnxruntime onnx
```

TorchVision

The ONNX tests depend on torchvision. This is tricky because TorchVision depends on PyTorch, but we don't want our package manager to install PyTorch; we want to use our locally built one.

```shell
# If you're not using CUDA, use the command below. If you are, see https://pytorch.org/get-started/locally/
pip install --upgrade --no-deps --pre torchvision --extra-index-url https://download.pytorch.org/whl/nightly/cpu
# manually install torchvision deps
```

Sanity check

You should be able to run these commands successfully:

```shell
python setup.py develop
pytest -svk test_arithmetic_prim_long test/onnx/test_pytorch_onnx_onnxruntime.py
```

And this should fail:

```shell
echo "assert False" >> torch/onnx/utils.py
pytest -svk test_arithmetic_prim_long test/onnx/test_pytorch_onnx_onnxruntime.py
git restore torch/onnx/utils.py
```

If the second command succeeds, then Python is probably finding a PyTorch that was installed via conda or pip, not the one that was built from source by python setup.py develop.
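One way to confirm which copy of a package Python resolves is to inspect its import location. The helper below is an illustrative sketch (the function name is made up, not part of the PyTorch workflow); run it with "torch" to check that the import resolves into your source checkout rather than a conda or pip site-packages directory.

```python
# Illustrative helper: report the filesystem path Python would import a
# package from. With "torch", the path should point into your PyTorch
# source checkout after `python setup.py develop`.
import importlib.util


def installed_location(package: str):
    """Return the path `package` would be imported from, or None if absent."""
    spec = importlib.util.find_spec(package)
    return getattr(spec, "origin", None)


# Example with a stdlib module; substitute "torch" in your environment.
print(installed_location("json"))
```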

VS Code

Recommended settings and extensions

You can place this recommended settings.json under .vscode/:

```json
{
  // Always remove trailing whitespaces
  "files.trimTrailingWhitespace": true,
  "files.insertFinalNewline": true,
  "files.trimFinalNewlines": true,
  "[python]": {
    "editor.tabSize": 4,
    // Set to true to auto sort imports
    "editor.codeActionsOnSave": {
      "source.organizeImports": false
    },
    "editor.rulers": [88]
  },
  // Enable Python linting and Pylance type checking
  "python.analysis.typeCheckingMode": "basic",
  "python.formatting.provider": "black",
  "python.sortImports.args": ["--profile", "black"],
  "python.linting.enabled": true,
  "python.linting.flake8Enabled": true,
  "python.linting.pydocstyleEnabled": true,
  "python.linting.pydocstyleArgs": ["--convention=google"],
  "python.linting.banditEnabled": true,
  "python.linting.pylintEnabled": true,
  "python.linting.pylintArgs": [
    "--disable=no-member"
  ]
}
```

Recommended extensions (you can install them in the "Extensions" tab)

```json
{
  "recommendations": [
    // Python
    "ms-python.python",
    "ms-python.vscode-pylance",
    "njpwerner.autodocstring",
    // Markdown
    "yzhang.markdown-all-in-one",
    "DavidAnson.vscode-markdownlint",
    // Coding style
    "shardulm94.trailing-spaces",
    // Linting display
    "usernamehw.errorlens",
    "igorsbitnev.error-gutters",
    "ryanluker.vscode-coverage-gutters",
    // Github review integration
    "GitHub.vscode-pull-request-github",
    "eamodio.gitlens",
    // Show changed files between branches
    "letmaik.git-tree-compare"
  ]
}
```

If you use Error Lens, I recommend the following settings:

```json
{
  "errorLens.excludeBySource": [
    "cSpell" // Exclude noisy spelling errors
  ],
  "errorLens.followCursor": "closestProblem",
  "errorLens.fontSize": "0.9em", // Smaller unintrusive messages
  "errorLens.followCursorMore": 3 // Hide errors too far away from the cursor
}
```

Debugging with gdb

You can set up VS Code to run gdb and set breakpoints when debugging C++ code. In launch.json, add the configuration:

```json
// ...
"configurations": [
  {
    "name": "(gdb) Launch",
    "type": "cppdbg",
    "request": "launch",
    "program": "<path to python bin>",
    "args": [
      "-m",
      "pytest",
      "<test file and test name in pytest format>"
    ],
    "stopAtEntry": false,
    "cwd": "path/to/repo/of/pytorch",
    "environment": [],
    "externalConsole": false,
    "MIMode": "gdb",
    "setupCommands": [
      {
        "description": "Enable pretty-printing for gdb",
        "text": "-enable-pretty-printing",
        "ignoreFailures": true
      },
      {
        "description": "Set Disassembly Flavor to Intel",
        "text": "-gdb-set disassembly-flavor intel",
        "ignoreFailures": true
      }
    ]
  }
]
```

You can then set breakpoints in the C++ source and run the debugger in VS Code.

Pull requests

PRs should be opened directly against main. A PR can be merged directly into main as long as it satisfies the ONNX merge rule:

  • Approved by one of the torch.onnx developers listed in the approved_by section.
  • All modified files fall under the patterns section.

Pay special attention to the following GitHub checks:

  • Checks with "onnx" in the name, which run ONNX related tests.
  • Checks with "Lint" in the name, which do code format checks.

Regarding other failing checks: if you are certain the failure is unrelated to your change, try rebasing on main; these failures are often caused by a branch being out of sync with main. You can ignore a failing check if it is a regression in main, which you can verify by checking whether main is also failing on CI HUD.

To merge your pull request, comment @pytorchbot merge on the PR. (doc: Bot commands)

If you make changes to non-ONNX related code, i.e. files outside of the ONNX merge rule, note that the PR will require additional reviews from people outside of the torch.onnx developers and will take longer to merge into main. In this case, pytorchbot will not be able to merge the pull request and will leave a comment like "Merge failed due to PR XXX does not match merge rules". Please label the pull request with onnx-needs-import.

See GitHub pull request workflow.

Adhere to Google's Code Review Developer Guide and the PyTorch Code review values.

Tests

Running all the tests locally takes a very long time, so generally you should run a few tests locally and rely on GitHub CI checks for comprehensive testing. We highly recommend using pytest to run tests selectively. Note that you should use python -m pytest rather than calling pytest directly to make sure it uses your locally built version of PyTorch.

Most relevant tests are in test/onnx/.

The most used test file is test_pytorch_onnx_onnxruntime.py. The tests in this file generally:

  • Define a subclass of torch.nn.Module.
  • Define some inputs.
  • Call self.run_test() with the instantiated module and inputs.

run_test() converts the module to ONNX and compares the output between PyTorch and ONNX Runtime.
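The steps above can be sketched as follows. The module and test names here are illustrative (this exact test does not exist in the suite); real tests are methods on TestONNXRuntime in test/onnx/test_pytorch_onnx_onnxruntime.py.

```python
# Sketch of the test pattern described above (illustrative names).
import torch


class MulModule(torch.nn.Module):
    def forward(self, x, y):
        # The exporter converts this forward into the equivalent ONNX ops.
        return x * y


# Inside a TestONNXRuntime subclass, the test body would look like:
#
#     def test_mul(self):
#         x = torch.randn(2, 3)
#         y = torch.randn(2, 3)
#         self.run_test(MulModule(), (x, y))
#
# run_test() exports the module to ONNX, runs it under ONNX Runtime,
# and asserts the ONNX Runtime outputs match PyTorch's.
```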

Tests added to TestONNXRuntime are automatically defined for all supported opset versions. Use the -k option in pytest to run the test you want.

For example:

```shell
# run the `test_quantized_arithmetic_qfunctional` test
python -m pytest test/onnx/test_pytorch_onnx_onnxruntime.py -k test_quantized_arithmetic_qfunctional
```

An example of adding unit tests for a new symbolic function: Add binary_cross_entropy_with_logits op.

You can use pytest to run tests in parallel and generate a coverage report.

```shell
python -m pytest -n auto --cov --cov-report "xml:test/coverage.xml" test/onnx/test_pytorch_onnx_onnxruntime.py
```

Dynamo exporter

Show diagnostics

Set the environment variable TORCH_LOGS="onnx_diagnostics" to capture detailed diagnostics.
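For example, the variable can also be set from inside a script before the export runs (a minimal sketch; setting it in the parent shell works the same way):

```python
# Minimal sketch: set TORCH_LOGS before running the exporter so
# diagnostics are captured for the export that follows.
import os

os.environ["TORCH_LOGS"] = "onnx_diagnostics"
# ... then run your export, e.g. torch.onnx.export(model, example_args, "model.onnx")
print(os.environ["TORCH_LOGS"])
```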

Links

Relevant parts of PyTorch repo

Decomposition and pre-dispatch

https://github.com/pytorch/pytorch/issues/116684

Pre-dispatch may skip functionalization

Features

Quantized model export

To support quantized model export, we need to unpack the quantized tensor inputs and the PackedParam weights (https://github.com/pytorch/pytorch/pull/69232). We construct through TupleConstruct to have a 1-to-1 input mapping, so that we can use the replaceAllUsesWith API for its successors. In addition, we support quantized namespace export, so developers can conveniently add more symbolics for quantized operators within the current framework.

Test dependencies

Can be updated in https://github.com/pytorch/pytorch/blob/main/.ci/docker/common/install_onnx.sh

I would love to contribute to PyTorch!
