TorchFix - a linter for PyTorch-using code with autofix support


TorchFix is a Python code static analysis tool - a linter with autofix capabilities - for users of PyTorch. It can be used to find and fix issues like usage of deprecated PyTorch functions and non-public symbols, and to adopt PyTorch best practices in general.

TorchFix is built upon https://github.com/Instagram/LibCST - a library to manipulate Python concrete syntax trees. LibCST enables "codemods" (autofixes) in addition to reporting issues.

TorchFix can be used as a Flake8 plugin (linting only) or as a standalone program (with autofix available for a subset of the lint violations).

Warning

Currently TorchFix is in a beta version stage, so there are still a lot of rough edges and many things can and will change.

Installation

To install the latest code from GitHub, clone/download https://github.com/pytorch-labs/torchfix and run `pip install .` inside the directory.

To install a release version from PyPI, run `pip install torchfix`.

Usage

After the installation, TorchFix will be available as a Flake8 plugin, so running Flake8 normally will run the TorchFix linter.

To see only TorchFix warnings without the rest of the Flake8 linters, you can run `flake8 --isolated --select=TOR0,TOR1,TOR2`.

TorchFix can also be run as a standalone program: `torchfix .` Add the `--fix` parameter to try to autofix some of the issues (the files will be overwritten!). To see some additional debug info, add the `--show-stderr` parameter.

Caution

Please keep in mind that autofix is a best-effort mechanism. Given the dynamic nature of Python, and especially the beta version status of TorchFix, it's very difficult to have certainty when making changes to code, even for the seemingly trivial fixes.

Warnings for issues with codes starting with TOR0, TOR1, and TOR2 are enabled by default. Warnings with other codes may be too noisy, so they are not enabled by default. To enable them, use standard Flake8 configuration options for the plugin mode, or use `torchfix --select=ALL .` for the standalone mode.

Reporting problems

If you encounter a bug or some other problem with TorchFix, please file an issue on https://github.com/pytorch-labs/torchfix/issues.

Rule Code Assignment Policy

New rule codes are assigned incrementally across the following categories:

  • TOR0XX, TOR1XX: General-purpose `torch` functionality.
  • TOR2XX: Domain-specific rules, such as for TorchVision.
  • TOR4XX: Noisy rules that are disabled by default.
  • TOR9XX: Internal rules specific to the pytorch/pytorch repo; other users should not use these.

TOR0, TOR1 and TOR2 are enabled by default.

Rules

TOR001 Use of removed function

torch.solve

This function was deprecated in PyTorch 1.9 and has since been removed.

`torch.solve` is deprecated in favor of `torch.linalg.solve`. `torch.linalg.solve` has its arguments reversed and does not return the LU factorization.

To get the LU factorization, see `torch.lu`, which can be used with `torch.lu_solve` or `torch.lu_unpack`.

`X = torch.solve(B, A).solution` should be replaced with `X = torch.linalg.solve(A, B)`.
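For example, a minimal sketch of the migration (the matrix values are illustrative):

```python
import torch

# old: X = torch.solve(B, A).solution
# new: note the reversed argument order.
A = torch.tensor([[3.0, 1.0],
                  [1.0, 2.0]])
B = torch.tensor([[9.0],
                  [8.0]])

X = torch.linalg.solve(A, B)  # solves A @ X = B
print(torch.allclose(A @ X, B))  # True
```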

torch.symeig

This function was deprecated in PyTorch 1.9 and has since been removed.

`torch.symeig` is deprecated in favor of `torch.linalg.eigh`.

The default behavior has changed from using the upper triangular portion of the matrix to using the lower triangular portion.

`L, _ = torch.symeig(A, upper=upper)`

should be replaced with

`L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L')`

and

`L, V = torch.symeig(A, eigenvectors=True)`

should be replaced with

`L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L')`
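As a sanity check on the replacement, a minimal sketch (the symmetric matrix is illustrative):

```python
import torch

# A symmetric matrix (illustrative values).
A = torch.tensor([[2.0, 1.0],
                  [1.0, 2.0]])

# Eigenvalues only (replaces torch.symeig without eigenvectors):
L = torch.linalg.eigvalsh(A, UPLO='L')

# Eigenvalues and eigenvectors (replaces eigenvectors=True):
L, V = torch.linalg.eigh(A, UPLO='L')

# The decomposition reconstructs A: V @ diag(L) @ V^T == A.
print(torch.allclose(V @ torch.diag(L) @ V.T, A, atol=1e-6))  # True
```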

TOR002 Likely typo `require_grad` in assignment. Did you mean `requires_grad`?

This is a common misspelling that can lead to silent performance issues.
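A minimal sketch of why the typo is silent: assigning to a misspelled attribute on a tensor just creates a new Python attribute, so autograd is never enabled.

```python
import torch

w = torch.randn(3)
w.require_grad = True   # typo: silently creates a new attribute
print(w.requires_grad)  # False -- autograd is NOT enabled

w.requires_grad = True  # correct spelling
print(w.requires_grad)  # True
```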

TOR003 Please pass `use_reentrant` explicitly to `checkpoint`

The default value of the `use_reentrant` parameter in `torch.utils.checkpoint` is being changed from `True` to `False`. In the meantime, the value needs to be passed explicitly.

See this forum post for details.
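A minimal sketch of passing the parameter explicitly (the `Linear` module and shapes are illustrative):

```python
import torch
from torch.utils.checkpoint import checkpoint

linear = torch.nn.Linear(4, 4)
x = torch.randn(2, 4, requires_grad=True)

# Pass use_reentrant explicitly; False is the recommended
# (and future default) variant.
out = checkpoint(linear, x, use_reentrant=False)
out.sum().backward()
print(x.grad is not None)  # True
```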

TOR004 Import of removed function

See TOR001.

TOR101 Use of deprecated function

torch.nn.utils.weight_norm

This function is deprecated. Use `torch.nn.utils.parametrizations.weight_norm`, which uses the modern parametrization API. The new `weight_norm` is compatible with `state_dict` generated from the old `weight_norm`.

Migration guide:

  • The magnitude (`weight_g`) and direction (`weight_v`) are now expressed as `parametrizations.weight.original0` and `parametrizations.weight.original1` respectively.

  • To remove the weight normalization reparametrization, use `torch.nn.utils.parametrize.remove_parametrizations`.

  • The weight is no longer recomputed once at module forward; instead, it will be recomputed on every access. To restore the old behavior, use `torch.nn.utils.parametrize.cached` before invoking the module in question.

torch.backends.cuda.sdp_kernel

This function is deprecated. Use the `torch.nn.attention.sdpa_kernel` context manager instead.

Migration guide: Each boolean input parameter (defaulting to `True` unless specified) of `sdp_kernel` corresponds to an `SDPBackend`. If the input parameter is `True`, the corresponding backend should be added to the input list of `sdpa_kernel`.

torch.chain_matmul

This function is deprecated in favor of `torch.linalg.multi_dot`.

Migration guide: `multi_dot` accepts a list of two or more tensors, whereas `chain_matmul` accepted multiple tensors as input arguments. For migration, collect the tensors that were passed as arguments to `chain_matmul` into a list for `multi_dot`.

Example: Replace `torch.chain_matmul(a, b, c)` with `torch.linalg.multi_dot([a, b, c])`.
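A runnable sketch of that example (tensor shapes are illustrative):

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
c = torch.randn(4, 5)

# old: torch.chain_matmul(a, b, c)
out = torch.linalg.multi_dot([a, b, c])
print(torch.allclose(out, a @ b @ c, atol=1e-5))  # True
```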

torch.cholesky

`torch.cholesky()` is deprecated in favor of `torch.linalg.cholesky()`.

Migration guide:

  • `L = torch.cholesky(A)` should be replaced with `L = torch.linalg.cholesky(A)`.
  • `L = torch.cholesky(A, upper=True)` should be replaced with `L = torch.linalg.cholesky(A).mH`.
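A sketch verifying both forms (the positive-definite matrix is illustrative):

```python
import torch

# A symmetric positive-definite matrix (illustrative values).
A = torch.tensor([[4.0, 2.0],
                  [2.0, 3.0]])

L = torch.linalg.cholesky(A)        # lower triangular factor
print(torch.allclose(L @ L.mT, A))  # True

# Replacement for the old upper=True form:
U = torch.linalg.cholesky(A).mH     # upper triangular factor
print(torch.allclose(U.mH @ U, A))  # True
```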

torch.qr

`torch.qr()` is deprecated in favor of `torch.linalg.qr()`.

Migration guide:

  • The usage `Q, R = torch.qr(A)` should be replaced with `Q, R = torch.linalg.qr(A)`.
  • The boolean parameter `some` of `torch.qr` is replaced by the string parameter `mode` in `torch.linalg.qr`. The corresponding change in usage is from `Q, R = torch.qr(A, some=False)` to `Q, R = torch.linalg.qr(A, mode="complete")`.
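A sketch of both modes, showing the resulting shapes (the input shape is illustrative):

```python
import torch

A = torch.randn(5, 3)

Q, R = torch.linalg.qr(A)                   # reduced, like some=True
print(Q.shape, R.shape)                     # torch.Size([5, 3]) torch.Size([3, 3])

Q, R = torch.linalg.qr(A, mode="complete")  # like torch.qr(A, some=False)
print(Q.shape, R.shape)                     # torch.Size([5, 5]) torch.Size([5, 3])
```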

torch.range

The function `torch.range()` is deprecated because its usage is incompatible with Python's builtin `range`. Instead, use `torch.arange()`, which produces values in `[start, end)`.

Migration guide:

  • `torch.range(start, end)` produces values in the range `[start, end]`, but `torch.arange(start, end)` produces values in `[start, end)`. For a step size of 1, migrate usage from `torch.range(start, end, 1)` to `torch.arange(start, end + 1, 1)`.
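A sketch of the endpoint shift with concrete values:

```python
import torch

# torch.range(0, 5) used to include the endpoint: 0, 1, 2, 3, 4, 5.
# torch.arange excludes it, so shift the end by one step.
old_style = torch.arange(0, 5 + 1, 1)
print(old_style.tolist())  # [0, 1, 2, 3, 4, 5]
```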

TOR102 `torch.load` without `weights_only` parameter is unsafe

Explicitly set `weights_only` to `False` only if you trust the data you load and need full pickle functionality; otherwise set `weights_only=True`.
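A minimal sketch of the safe form (the file path is illustrative):

```python
import os
import tempfile

import torch

t = torch.arange(4)
path = os.path.join(tempfile.mkdtemp(), "tensor.pt")
torch.save(t, path)

# Safe: restricts unpickling to tensors and primitive containers.
loaded = torch.load(path, weights_only=True)
print(torch.equal(loaded, t))  # True
```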

TOR103 Import of deprecated function

See TOR101.

License

TorchFix is BSD licensed, as found in the LICENSE file.
