Releases: pytorch/pytorch

PyTorch 2.7.1 Release, bug fix release

04 Jun 18:13
e2d141d

This release is meant to fix the following issues (regressions / silent correctness):

Torch.compile

Fix Excessive cudagraph re-recording for HF LLM models (#152287)
Fix torch.compile on some HuggingFace models (#151154)
Fix crash due to Exception raised inside torch.autocast (#152503)
Improve Error logging in torch.compile (#149831)
Mark mutable custom operators as cacheable in torch.compile (#151194)
Implement a workaround for a graph break with older versions of einops (#153925)
Fix an issue with tensor.view(dtype).copy_(...) (#151598)

Flex Attention

Fix assertion error due to inductor permuting inputs to flex attention (#151959)
Fix performance regression on nanogpt speedrun (#152641)

Distributed

Fix extra CUDA context created by barrier (#149144)
Fix an issue related to Distributed Fused Adam in ROCm/APEX when using the nccl_ub feature (#150010)
Add a workaround for a random hang in non-blocking API mode in NCCL 2.26 (#154055)

MacOS

Fix MacOS compilation error with Clang 17 (#151316)
Fix binary kernels producing incorrect results when one of the tensor arguments is a wrapped scalar on MPS devices (#152997)

Other

Improve PyTorch wheel size after the introduction of 128-bit vectorization (#148320) (#152396)
Fix fmsub function definition (#152075)
Fix Floating point exception in torch.mkldnn_max_pool2d (#151848)
Fix abnormal inference output with XPU:1 device (#153067)
Fix Illegal Instruction Caused by grid_sample on Windows (#152613)
Fix ONNX decomposition does not preserve custom CompositeImplicitAutograd ops (#151826)
Fix error with dynamic linking of libgomp library (#150084)
Fix segfault in profiler with Python 3.13 (#153848)


PyTorch 2.7.0 Release

23 Apr 16:16
1341794

PyTorch 2.7.0 Release Notes

Highlights

Beta:
  • Torch.Compile support for Torch Function Modes
  • Mega Cache

Prototype:
  • NVIDIA Blackwell Architecture Support
  • PyTorch Native Context Parallel
  • Enhancing Intel GPU Acceleration
  • FlexAttention LLM first token processing on X86 CPUs
  • FlexAttention LLM throughput mode optimization on X86 CPUs
  • Foreach Map
  • Flex Attention for Inference
  • Prologue Fusion Support in Inductor

For more details about these highlighted features, you can look at the release blog post.
Below are the full release notes for this release.

Tracked Regressions

NCCL init hits CUDA failure 'invalid argument' on 12.2 driver

Some users with a 12.2 CUDA driver (version 535) report seeing "CUDA driver error: invalid argument" during NCCL or Symmetric Memory initialization. This issue is currently under investigation; see #150852. If you build PyTorch from source, a known workaround is to rebuild PyTorch with the CUDA 12.2 toolkit. Otherwise, you can try upgrading the CUDA driver on your system.

Backwards Incompatible Changes

Dropped support for Triton < 2.2.0. Removed support for CUDA 12.4 and Anaconda in CI/CD.

C++ extensions with py_limited_api=True are now built with -DPy_LIMITED_API (#145764)

We formally began respecting the py_limited_api=True kwarg in 2.6 and stopped linking libtorch_python.so when the flag was specified, as libtorch_python.so does not guarantee using APIs from the stable Python limited API. In 2.7, we go further by specifying the -DPy_LIMITED_API flag, which enforces that the extension is buildable with the limited API. As a result of this enforcement, custom extensions that set py_limited_api=True but do not abide by the limited API may fail to build. For an example, see #152243.

This is strictly better behavior, as it is risky to claim CPython agnosticism without enforcing it with the flag. If you run into this issue, please ensure that the extension you are building does not use any APIs outside of the Python limited API, e.g., pybind.

Change torch.Tensor.new_tensor() to be on the given Tensor's device by default (#144958)

This function previously always created the new Tensor on the "cpu" device; it now uses the same device as the current Tensor object. This behavior is now consistent with the other .new_* methods.
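A minimal sketch of the behavior change (the CUDA device here is illustrative and assumes one is available):

import torch

t = torch.ones(3, device="cuda")
new = t.new_tensor([1.0, 2.0])
print(new.device)  # 2.6 and earlier: cpu; 2.7: cuda:0, following t's device
# The old behavior can still be requested explicitly:
new_cpu = t.new_tensor([1.0, 2.0], device="cpu")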

Use Manylinux 2.28 and CXX11_ABI=1 for future released Linux wheel builds.

With the migration to manylinux_2_28 (AlmaLinux 8 based), we can no longer support OS distros with glibc 2.26. These include the popular Amazon Linux 2 and CentOS 7. (#143423, #146200, #148028, #148135, #148195, #148129)

torch.onnx.dynamo_export now uses the ExportedProgram logic path (#137296)

Users of the torch.onnx.dynamo_export API may see some ExportOptions become
unsupported due to an internal switch to use torch.onnx.export(..., dynamo=True): diagnostic_options, fake_context and onnx_registry are removed/ignored by ExportOptions. Only dynamic_shapes is retained.

Users should move to the dynamo=True option on torch.onnx.export, as
torch.onnx.dynamo_export is now deprecated. Use the dynamic_shapes argument of torch.onnx.export to specify dynamic shapes for the model.

Version 2.6.0

torch.onnx.dynamo_export(model, *args, **kwargs)

Version 2.7.0

torch.onnx.export(model, args, kwargs=kwargs, dynamo=True)

Finish deprecation of LRScheduler.print_lr() along with the verbose kwarg to the LRScheduler constructor (#147301)

Both APIs have been deprecated since 2.2. Please use LRScheduler.get_last_lr() to access the learning rate instead. print_lr and verbose were confusing, not properly documented and little used, as described in #99270, so we deprecated them in 2.2. Now we complete the deprecation by removing them entirely. To access and print the learning rate of an LRScheduler:

Version 2.6.0

optim = ...
lrsched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, verbose=True)
# lrsched will internally call print_lr() and print the learning rate

Version 2.7.0

optim = ...
lrsched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim)
print(lrsched.get_last_lr())

libtorch_python.so symbols are now invisible by default on all platforms except Apple (#142214)

Previously, the symbols in libtorch_python.so were exposed with default visibility. We have transitioned to being more intentional about what we expose as public symbols for our Python API in C++. After #142214, public symbols are marked explicitly, while everything else is hidden. Some extensions using private symbols may see linker failures with this change.

Please use torch.export.export instead of capture_pre_autograd_graph to export the model for PyTorch 2 Export Quantization (#139505)

capture_pre_autograd_graph was a temporary API in torch.export. Now that a better long-term API, export, is available, we can deprecate it.

Version 2.6.0

from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
m = capture_pre_autograd_graph(m, *example_inputs)
m = prepare_pt2e(m, quantizer)

Version 2.7.0

from torch.export import export
from torch.ao.quantization.quantize_pt2e import prepare_pt2e
# please get xnnpack quantizer from executorch (https://github.com/pytorch/executorch/)
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
m = export(m, *example_inputs)
m = prepare_pt2e(m, quantizer)

New interface for torch.fx.passes.graph_transform_observer.GraphTransformObserver to enable node-level provenance tracking (#144277)

We now track a mapping between the nodes in the pre-grad and post-grad graph. See the issue for an example frontend to visualize the transformations. To update your GraphTransformObserver subclasses, instead of overriding on_node_creation and on_node_erase, there are new functions get_node_creation_hook, get_node_erase_hook, get_node_replace_hook and get_deepcopy_hook. These are registered on the GraphModule member of the GraphTransformObserver upon entry and exit of a with block.

Version 2.6.0

class MyPrintObserver(GraphTransformObserver):
    def on_node_creation(self, node: torch.fx.Node):
        print(node)

Version 2.7.0

class MyPrintObserver(GraphTransformObserver):
    def get_node_creation_hook(self):
        def hook(node: torch.fx.Node):
            print(node)
        return hook

torch.ao.quantization.pt2e.graph_utils.get_control_flow_submodules is no longer public (#141612)

We are planning to make all functions under torch.ao.quantization.pt2e.graph_utils private. This update marks get_control_flow_submodules as a private API. If you have to or want to continue using get_control_flow_submodules, please make a private call by using _get_control_flow_submodules.

Example:
Version 2.6:

>>> from torch.ao.quantization.pt2e.graph_utils import get_control_flow_submodules

Version 2.7:

>>> from torch.ao.quantization.pt2e.graph_utils import get_control_flow_submodules
ImportError: cannot import name 'get_control_flow_submodules' from 'torch.ao.quantization.pt2e.graph_utils'
>>> from torch.ao.quantization.pt2e.graph_utils import _get_control_flow_submodules  # Note: Use _get_control_flow_submodules for private access

Deprecations

torch.onnx.dynamo_export is deprecated (#146425, #146639, #146923)

Users should use the dynamo=True option on torch.onnx.export.

Version 2.6.0

torch.onnx.dynamo_export(model, *args, **kwargs)

Version 2.7.0

torch.onnx.export(model, args, kwargs=kwargs, dynamo=True)

XNNPACKQuantizer is deprecated in PyTorch and moved to ExecuTorch; please use it from executorch.backends.xnnpack.quantizer.xnnpack_quantizer instead of torch.ao.quantization.quantizer.xnnpack_quantizer (#144940).

XNNPACKQuantizer is a quantizer for xnnpack that was added into pytorch/pytorch for initial development. Ho...


PyTorch 2.6.0 Release

29 Jan 17:18
1eba9b3
  • Highlights
  • Tracked Regressions
  • Backwards Incompatible Change
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation
  • Developers

Highlights

We are excited to announce the release of PyTorch® 2.6 (release notes)! This release features multiple improvements for PT2: torch.compile can now be used with Python 3.13; new performance-related knob torch.compiler.set_stance; several AOTInductor enhancements. Besides the PT2 improvements, another highlight is FP16 support on X86 CPUs.

NOTE: Starting with this release we are not going to publish on Conda, please see [Announcement] Deprecating PyTorch’s official Anaconda channel for the details.

For this release the experimental Linux binaries shipped with CUDA 12.6.3 (as well as Linux Aarch64, Linux ROCm 6.2.4, and Linux XPU binaries) are built with CXX11_ABI=1 and are using the Manylinux 2.28 build platform. If you build PyTorch extensions with custom C++ or CUDA extensions, please update these builds to use CXX11_ABI=1 as well and report any issues you are seeing. For the next PyTorch 2.7 release we plan to switch all Linux builds to Manylinux 2.28 and CXX11_ABI=1, please see [RFC] PyTorch next wheel build platform: manylinux-2.28 for the details and discussion.

Also in this release, as an important security improvement, we have changed the default value of the weights_only parameter of torch.load. This is a backward compatibility-breaking change; please see this forum post for more details.
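A small illustration of the new default (the file name is hypothetical):

import torch

torch.save({"weights": torch.randn(2)}, "checkpoint.pt")

# 2.6 default: weights_only=True, so only a safe allowlist of types is unpickled.
state = torch.load("checkpoint.pt")

# Loading arbitrary pickled objects from a checkpoint you trust requires opting out explicitly.
state = torch.load("checkpoint.pt", weights_only=False)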

This release is composed of 3892 commits from 520 contributors since PyTorch 2.5. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve PyTorch. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.

Beta:
  • torch.compiler.set_stance
  • torch.library.triton_op
  • torch.compile support for Python 3.13
  • New packaging APIs for AOTInductor
  • AOTInductor: minifier
  • AOTInductor: ABI-compatible mode code generation
  • FP16 support for X86 CPUs

Prototype:
  • Improved PyTorch user experience on Intel GPUs
  • FlexAttention support on X86 CPU for LLMs
  • Dim.AUTO
  • CUTLASS and CK GEMM/CONV Backends for AOTInductor

*To see a full list of public feature submissions click here.

BETA FEATURES

[Beta] torch.compiler.set_stance

This feature enables the user to specify different behaviors (“stances”) that torch.compile can take between different invocations of compiled functions. One of the stances, for example, is “eager_on_recompile”, which instructs PyTorch to run code eagerly when a recompile is necessary, reusing cached compiled code when possible.

For more information please refer to the set_stance documentation and the Dynamic Compilation Control with torch.compiler.set_stance tutorial.
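A minimal sketch of the stance API described above (shapes and the dynamic=False setting are illustrative):

import torch

@torch.compile(dynamic=False)
def f(x):
    return x * 2

f(torch.randn(4))  # first call compiles

# Reuse cached compiled code where possible; otherwise run eagerly instead of recompiling.
torch.compiler.set_stance("eager_on_recompile")
f(torch.randn(4))  # hits the cached compiled code
f(torch.randn(8))  # a new shape would normally recompile; runs eagerly instead
torch.compiler.set_stance("default")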

[Beta] torch.library.triton_op

torch.library.triton_op offers a standard way of creating custom operators that are backed by user-defined triton kernels.

When users turn user-defined triton kernels into custom operators, torch.library.triton_op allows torch.compile to peek into the implementation, enabling torch.compile to optimize the triton kernel inside it.

For more information please refer to the triton_op documentation and the Using User-Defined Triton Kernels with torch.compile tutorial.

[Beta] torch.compile support for Python 3.13

torch.compile previously only supported Python up to version 3.12. Users can now optimize models with torch.compile in Python 3.13.

[Beta] New packaging APIs for AOTInductor

A new package format, “PT2 archive”, has been introduced. This essentially contains a zipfile of all the files that need to be used by AOTInductor, and allows users to send everything needed to other environments. There is also functionality to package multiple models into one artifact, and to store additional metadata inside of the package.

For more details please see the updated torch.export AOTInductor Tutorial for Python runtime.
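A rough sketch of the packaging flow, assuming the torch._inductor.aoti_compile_and_package and aoti_load_package entry points from the linked tutorial (exact names and signatures may differ between versions):

import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

ep = torch.export.export(M(), (torch.randn(8),))

# Compile with AOTInductor and bundle everything into a single "PT2 archive" file.
pkg = torch._inductor.aoti_compile_and_package(ep, package_path="model.pt2")  # assumed API

# The archive can be shipped to another environment and loaded for Python inference.
loaded = torch._inductor.aoti_load_package(pkg)
print(loaded(torch.randn(8)))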

[Beta] AOTInductor: minifier

If a user encounters an error while using AOTInductor APIs, AOTInductor Minifier allows creation of a minimal nn.Module that reproduces the error.

For more information please see the AOTInductor Minifier documentation.

[Beta] AOTInductor: ABI-compatible mode code generation

AOTInductor-generated model code has a dependency on PyTorch C++ libraries. As PyTorch evolves quickly, it’s important to make sure that previously AOTInductor-compiled models can continue to run on newer PyTorch versions, i.e., that AOTInductor is backward compatible.

In order to guarantee application binary interface (ABI) backward compatibility, we have carefully defined a set of stable C interfaces in libtorch and made sure AOTInductor generates code that only refers to that specific set of APIs and nothing else in libtorch. We will keep the set of C APIs stable across PyTorch versions and thus provide backward compatibility guarantees for AOTInductor-compiled models.

[Beta] FP16 support for X86 CPUs (both eager and Inductor modes)

Float16 datatype is commonly used for reduced memory usage and faster computation in AI inference and training. CPUs like the recently launched Intel® Xeon® 6 with P-Cores support the Float16 datatype with the native AMX accelerator. Float16 support on X86 CPUs was introduced in PyTorch 2.5 as a prototype feature, and it has now been further improved for both eager mode and torch.compile + Inductor mode, making it a Beta-level feature with both functionality and performance verified across a broad scope of workloads.
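A small sketch of float16 CPU usage with torch.compile (the module is illustrative; best performance is expected on CPUs with native FP16 acceleration such as AMX):

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
).to(dtype=torch.float16).eval()

x = torch.randn(32, 64, dtype=torch.float16)

compiled = torch.compile(model)  # Inductor CPP backend generates fp16 CPU kernels
with torch.no_grad():
    y = compiled(x)
print(y.dtype)  # torch.float16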

PROTOTYPE FEATURES

[Prototype] Improved PyTorch user experience on Intel GPUs

PyTorch user experience on Intel GPUs is further improved with simplified installation steps, Windows release binary distribution and expanded coverage of supported GPU models including the latest Intel® Arc™ B-Series discrete graphics. Application developers and researchers seeking to fine-tune, run inference, and develop with PyTorch models on Intel® Core™ Ultra AI PCs and Intel® Arc™ discrete graphics will now be able to directly install PyTorch with binary releases for Windows, Linux and Windows Subsystem for Linux 2.

  • Simplified Intel GPU software stack setup to enable one-click installation of the torch-xpu PIP wheels to run deep learning workloads in an out of the box fashion, eliminating the complexity of installing and activating Intel GPU development software bundles.
  • Windows binary releases for torch core, torchvision and torchaudio have been made available for Intel GPUs, and the supported GPU models have been expanded from Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics and Intel® Arc™ A-Series Graphics to the latest GPU hardware, Intel® Arc™ B-Series graphics.
  • Further enhanced coverage of Aten operators on Intel GPUs with SYCL* kernels for smooth eager mode execution, as well as bug fixes and performance optimizations for torch.compile on Intel GPUs.

For more information regarding Intel GPU support, please refer to the Getting Started Guide.

[Prototype] FlexAttention support on X86 CPU for LLMs

FlexAttention was initially introduced in PyTorch 2.5 to provide optimized implementations for Attention variants with a flexible API. In PyTorch 2.6, X86 CPU support for FlexAttention was added through TorchInductor CPP backend. This new feature leverages and extends current CPP template abilities to support...


PyTorch 2.5.1: bug fix release

29 Oct 17:58
a8d6afb

This release is meant to fix the following regressions:

  • Wheels from PyPI are unusable out of the box on RPM-based Linux distributions: #138324
  • PyPI arm64 distribution logs cpuinfo error on import: #138333
  • Crash when using torch.compile with Math scaled_dot_product_attention in AMP mode: #133974
  • [MPS] Internal crash due to invalid buffer size computation when the sliced API is used: #137800
  • Several issues related to CuDNN Attention: #138522

Besides the regression fixes, the release includes several documentation updates.

See release tracker #132400 for additional information.


PyTorch 2.5.0 Release, SDPA CuDNN backend, Flex Attention

17 Oct 16:26
32f585d

PyTorch 2.5 Release Notes

  • Highlights
  • Backwards Incompatible Change
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation
  • Developers
  • Security

Highlights

We are excited to announce the release of PyTorch® 2.5! This release features a new CuDNN backend for SDPA, enabling speedups by default for users of SDPA on H100s or newer GPUs. As well, regional compilation of torch.compile offers a way to reduce the cold start up time for torch.compile by allowing users to compile a repeated nn.Module (e.g. a transformer layer in LLM) without recompilations. Finally, TorchInductor CPP backend offers solid performance speedup with numerous enhancements like FP16 support, CPP wrapper, AOT-Inductor mode, and max-autotune mode.
This release is composed of 4095 commits from 504 contributors since PyTorch 2.4. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.5. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.
As well, please check out our new ecosystem projects releases with TorchRec and TorchFix.

Beta:
  • CuDNN backend for SDPA
  • torch.compile regional compilation without recompilations
  • TorchDynamo added support for exception handling & MutableMapping types
  • TorchInductor CPU backend optimization

Prototype:
  • FlexAttention
  • Compiled Autograd
  • Flight Recorder
  • Max-autotune Support on CPU with GEMM Template
  • TorchInductor on Windows
  • FP16 support on CPU path for both eager mode and TorchInductor CPP backend
  • Autoload Device Extension
  • Enhanced Intel GPU support

*To see a full list of public feature submissions click here.

BETA FEATURES

[Beta] CuDNN backend for SDPA

The cuDNN "Fused Flash Attention" backend was landed fortorch.nn.functional.scaled_dot_product_attention. On NVIDIA H100 GPUs this can provide up to 75% speed-up over FlashAttentionV2. This speedup is enabled by default for all users of SDPA on H100 or newer GPUs.

[Beta] torch.compile regional compilation without recompilations

Regional compilation without recompilations, via torch._dynamo.config.inline_inbuilt_nn_modules, which defaults to True in 2.5+. This option allows users to compile a repeated nn.Module (e.g. a transformer layer in an LLM) without recompilations. Compared to compiling the full model, this option can result in smaller compilation latencies, with 1%-5% performance degradation compared to full model compilation.

See the tutorial for more information.
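A hedged sketch of regional compilation: compile the repeated block rather than the whole model so a single compiled artifact is reused across layers (module sizes are illustrative):

import torch

class Layer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(64, 64)

    def forward(self, x):
        return torch.relu(self.linear(x))

class Model(torch.nn.Module):
    def __init__(self, num_layers=8):
        super().__init__()
        self.layers = torch.nn.ModuleList(Layer() for _ in range(num_layers))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = Model()
# Compile the repeated region instead of the full model; with
# torch._dynamo.config.inline_inbuilt_nn_modules (True by default in 2.5+),
# the compiled code is reused for every layer without recompilation.
for layer in model.layers:
    layer.compile()

out = model(torch.randn(16, 64))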

[Beta] TorchInductor CPU backend optimization

This feature advances Inductor’s CPU backend optimization, including CPP backend code generation and FX fusions with customized CPU kernels. The Inductor CPU backend supports vectorization of common data types and all Inductor IR operations, along with the static and symbolic shapes. It is compatible with both Linux and Windows OS and supports the default Python wrapper, the CPP wrapper, and AOT-Inductor mode.

Additionally, it extends the max-autotune mode of the GEMM template (prototyped in 2.5), offering further performance gains. The backend supports various FX fusions, lowering to customized kernels such as oneDNN for Linear/Conv operations and SDPA. The Inductor CPU backend consistently achieves performance speedups across three benchmark suites (TorchBench, Hugging Face, and TIMM), outperforming eager mode in 97.5% of the 193 models tested.

PROTOTYPE FEATURES

[Prototype] FlexAttention

We've introduced a flexible API that enables implementing various attention mechanisms such as Sliding Window, Causal Mask, and PrefixLM with just a few lines of idiomatic PyTorch code. This API leverages torch.compile to generate a fused FlashAttention kernel, which eliminates extra memory allocation and achieves performance comparable to handwritten implementations. Additionally, we automatically generate the backwards pass using PyTorch's autograd machinery. Furthermore, our API can take advantage of sparsity in the attention mask, resulting in significant improvements over standard attention implementations.

For more information and examples, please refer to the official blog post and Attention Gym.
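A minimal sketch of the flex_attention API with a causal score_mod (assumes a CUDA device; tensor sizes are illustrative):

import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Mask out future positions by sending their scores to -inf.
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

q = torch.randn(1, 4, 256, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# torch.compile fuses the score_mod into a single FlashAttention-style kernel.
out = torch.compile(flex_attention)(q, k, v, score_mod=causal)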

[Prototype] Compiled Autograd

Compiled Autograd is an extension to the PT2 stack allowing the capture of the entire backward pass. Unlike the backward graph traced by AOT dispatcher, Compiled Autograd tracing is deferred until backward execution time, which makes it impervious to forward pass graph breaks, and allows it to record backward hooks into the graph.

Please refer to the tutorial for more information.
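A hedged sketch of enabling Compiled Autograd; the config flag name follows the linked tutorial and should be treated as an assumption:

import torch

torch._dynamo.config.compiled_autograd = True  # flag name per the Compiled Autograd tutorial

model = torch.nn.Linear(8, 8)

@torch.compile
def train_step(x):
    loss = model(x).sum()
    # The backward pass is traced at backward-execution time, so forward graph
    # breaks do not fragment it and backward hooks can be recorded into the graph.
    loss.backward()
    return loss

train_step(torch.randn(4, 8))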

[Prototype] Flight Recorder

Flight recorder is a new debugging tool that helps debug stuck jobs. The tool works by continuously capturing information about collectives as they run. Upon detecting a stuck job, the information can be used to quickly identify misbehaving ranks/machines along with code stack traces.

For more information please refer to the following tutorial.

[Prototype] Max-autotune Support on CPU with GEMM Template

Max-autotune mode for the Inductor CPU backend in torch.compile profiles multiple implementations of operations at compile time and selects the best-performing one. This is particularly beneficial for GEMM-related operations, using a C++ template-based GEMM implementation as an alternative to the ATen-based approach with oneDNN and MKL libraries. We support FP32, BF16, FP16, and INT8 with epilogue fusions for x86 CPUs. We’ve seen up to 7% geomean speedup on the dynamo benchmark suites and up to 20% boost in next-token latency for LLM inference.

For more information please refer to the tutorial.
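A minimal sketch of enabling max-autotune for a GEMM-heavy module on CPU (module sizes are illustrative):

import torch

model = torch.nn.Linear(1024, 1024)
x = torch.randn(64, 1024)

# mode="max-autotune" benchmarks candidate implementations (e.g. the C++ GEMM
# template vs. the ATen/oneDNN path) at compile time and keeps the fastest one.
compiled = torch.compile(model, mode="max-autotune")
with torch.no_grad():
    y = compiled(x)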

[Prototype] TorchInductor CPU on Windows

The Inductor CPU backend in torch.compile now works on Windows. We currently support MSVC (cl), Clang (clang-cl) and the Intel compiler (icx-cl) for Inductor on Windows.

See the tutorial for more details.

[Prototype] FP16 support on CPU path for both eager mode and TorchInductor CPP backend

Float16 is a commonly used reduced-precision floating point type for performance improvement in neural network inference/training. As of this release, float16 is supported on the CPU path for both eager mode and TorchInductor.

[Prototype] Autoload Device Extension

PyTorch now supports autoloading for out-of-tree device extensions, streamlining integration by eliminating the need for manual imports. This feature, enabled through the torch.backends entrypoint, simplifies usage by ensuring seamless extension loading, while allowing users to disable it via an environment variable if needed.

See the tutorial for more information.

[Prototype] Enhanced Intel GPU support

Intel GPU support enhancements are now available for both Intel® Data Center GPU Max Series and Intel® Client GPUs (Intel® Core™ Ultra processors with built-in Intel® Arc™ graphics and Intel® Arc™ Graphics for dGPU parts), making it easier to accelerate your machine learning workflows on Intel GPUs in the PyTorch 2.5 release. We also enabled initial support of PyTorch on Windows for Intel® Client GPUs in this release.

  • Expanded PyTorch hardware backend support matrix to include both Intel Data Center and Client GPUs.  
  • The implementation of SYCL* kernels to enhance coverage and execution of Aten operators on Intel GPUs to boost performance in PyTorch eager mode.
  • Enhanced Intel GPU backend of torch.compile to improve inference and training performance for a wide range of deep learning workloads.

These features are available through PyTorch preview and nightly binary PIP wheels. For more information regarding Intel GPU support, please refer to the documentation.

Backwards Incompatible changes

Distributed

  • [c10d] Remove Option for ProcessGroup and Expose backend Options to reflect the correct code structure (#132931)

    • We released dispatchable collectives in 2.0; Backend Options are now used for backend initialization, so the ProcessGroup options are no longer needed.
    • In 2.4 and before, users can do:
    # Users can pass in a basic option when creating an instance of ProcessGroup
    base_pg_options = ProcessGroup.Options(backend=str(backend))
    base_pg_options._timeout = timeout
    pg: ProcessGroup = ProcessGroup(store, rank, group_size, base_pg_options)

    # Users then need to create a backend option to create the comm backend (e.g., ProcessGroupNCCL)
    pg_options = ProcessGroupNCCL.Options()
    backend = ProcessGroupNCCL(store, rank, group_size, pg_options)
    • But from 2.5 onwards, users don’t need to pass in an option to create an instance of ProcessGroup, and they can still set a default backend for the pg, since code may still query the default backend:
    # No basic option is passed in when creating an instance of ProcessGroup
    pg: ProcessGroup = ProcessGroup(store, rank, group_size)
    pg._set_default_backend(...

PyTorch 2.4.1 Release, bug fix release

04 Sep 19:59
ee1b680

This release is meant to fix the following issues (regressions / silent correctness):

Breaking Changes:

  • The pytorch/pytorch docker image now installs the PyTorch package through pip and has switched its conda installation from Miniconda to Miniforge (#134274)

Windows:

  • Fix performance regression on Windows related to MKL static linking (#130619) (#130697)
  • Fix error during loading on Windows: [WinError 126] The specified module could not be found. (#131662) (#130697)

MPS:

  • Fix tensor.clamp producing wrong values (#130226)
  • Fix incorrect result from batch norm with sliced inputs (#133610)

ROCM:

  • Fix for launching kernel invalid config error when calling embedding with large index (#130994)
  • Added a check and a warning when attempting to use hipBLASLt on an unsupported architecture (#128753)
  • Fix image corruption with Memory Efficient Attention when running HuggingFace Diffusers Stable Diffusion 3 pipeline (#133331)

Distributed:

  • Fix FutureWarning when using torch.load internally (#130663)
  • Fix FutureWarning when using torch.cuda.amp.autocast internally (#130660)

Torch.compile:

  • Fix exception with torch compile when onnxruntime-training and deepspeed packages are installed. (#131194)
  • Fix silent incorrectness with torch.library.custom_op with mutable inputs and torch.compile (#133452)
  • Fix SIMD detection on Linux ARM (#129075)
  • Do not use C++20 features in cpu_inductor code (#130816)

Packaging:

  • Fix for exposing statically linked libstdc++ CXX11 ABI symbols (#134494)
  • Fix error while building PyTorch from source due to missing QNNPACK module (#131864)
  • Make PyTorch buildable from source on PowerPC (#129736)
  • Fix XPU extension building (#132847)

Other:

  • Fix warning when using pickle on a nn.Module that contains tensor attributes (#130246)
  • Fix NaNs return in MultiheadAttention when need_weights=False (#130014)
  • Fix nested tensor MHA produces incorrect results (#130196)
  • Fix error when using torch.utils.flop_counter.FlopCounterMode (#134467)

Tracked Regressions:

  • The experimental remote caching feature for Inductor's autotuner (enabled via TORCHINDUCTOR_AUTOTUNE_REMOTE_CACHE) is known to still be broken in this release and is actively being worked on in main. The following error is generated: redis.exceptions.DataError: Invalid input of type: 'dict'. Please use nightlies if you need this feature (reported and fixed by PR #134032).

Release tracker #132400 contains all relevant pull requests related to this release as well as links to related issues.


PyTorch 2.4: Python 3.12, AOTInductor freezing, libuv backend for TCPStore

24 Jul 18:39
d990dad

PyTorch 2.4 Release Notes

  • Highlights
  • Tracked Regressions
  • Backward incompatible changes
  • Deprecations
  • New features
  • Improvements
  • Bug Fixes
  • Performance
  • Documentation
  • Developers
  • Security

Highlights

We are excited to announce the release of PyTorch® 2.4!
PyTorch 2.4 adds support for the latest version of Python (3.12) for torch.compile.
AOTInductor freezing gives developers running AOTInductor more performance based optimizations by allowing the
serialization of MKLDNN weights. As well, a new default TCPStore server backend utilizing libuv has been introduced
which should significantly reduce initialization times for users running large-scale jobs.
Finally, a new Python Custom Operator API makes it easier than before to integrate custom kernels
into PyTorch, especially for torch.compile.
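A hedged sketch of the higher-level Python Custom Operator API mentioned above, using torch.library.custom_op (the operator name and namespace are made up for illustration):

import torch

@torch.library.custom_op("mylib::scale", mutates_args=())
def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    return x * factor

# A "fake" (meta) implementation lets torch.compile reason about shapes
# without running the real kernel.
@scale.register_fake
def _(x, factor):
    return torch.empty_like(x)

@torch.compile
def f(x):
    return scale(x, 2.0)

print(f(torch.randn(4)))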

This release is composed of 3661 commits and 475 contributors since PyTorch 2.3. We want to sincerely thank our
dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we
improve 2.4. More information about how to get started with the PyTorch 2-series can be found at our
Getting Started page.

Beta:
  • Python 3.12 support for torch.compile
  • AOTInductor Freezing for CPU
  • New Higher-level Python Custom Operator API
  • Switching TCPStore’s default server backend to libuv

Prototype:
  • FSDP2: DTensor-based per-parameter-sharding FSDP
  • torch.distributed.pipelining, simplified pipeline parallelism
  • Intel GPU is available through source build

Performance Improvements:
  • torch.compile optimizations for AWS Graviton (aarch64-linux) processors
  • BF16 symbolic shape optimization in TorchInductor
  • Performance optimizations for GenAI projects utilizing CPU devices

*To see a full list of public feature submissions click here.

Tracked Regressions

Subproc exception with torch.compile and onnxruntime-training

There is a reported issue (#131070) when using torch.compile if the onnxruntime-training lib is
installed. The issue will be fixed (#131194) in v2.4.1. It can be worked around locally by setting the environment variable
TORCHINDUCTOR_WORKER_START=fork before executing the script.

cu118 wheels will not work with pre-cuda12 drivers

It was also reported (#130684) that the new version of triton uses cuda features that are not compatible with pre-cuda12 drivers.
In this case, the workaround is to set
TRITON_PTXAS_PATH manually as follows (adapt the code according to the local installation path):

TRITON_PTXAS_PATH=/usr/local/lib/python3.10/site-packages/torch/bin/ptxas  python script.py

Backwards Incompatible Change

Python frontend

Default ThreadPool size to number of physical cores (#125963)

Changed the default number of threads used for intra-op parallelism from the number of logical cores to the number of
physical cores. This should reduce core oversubscription when running CPU workloads and improve performance.
Previous behavior can be recovered by using torch.set_num_threads to set the number of threads to the desired value.
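A small example of inspecting and overriding the intra-op thread count (the value 8 is illustrative):

import torch

print(torch.get_num_threads())  # 2.4 default: number of physical cores

# Recover the previous behavior (or any specific count) if needed:
torch.set_num_threads(8)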

Fix torch.quasirandom.SobolEngine.draw default dtype handling (#126781)

The default dtype value has been changed from torch.float32 to the current default dtype as given by
torch.get_default_dtype() to be consistent with other APIs.
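A small illustration of the new dtype behavior:

import torch

torch.set_default_dtype(torch.float64)
engine = torch.quasirandom.SobolEngine(dimension=2)
samples = engine.draw(4)
print(samples.dtype)  # 2.4: torch.float64, following the default dtype; previously torch.float32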

Forbid subclassing torch._C._TensorBase directly (#125558)

This is an internal class that users could previously subclass to create an object that is almost a Tensor in Python, and it was
advertised as such in some tutorials. This is no longer allowed, to improve consistency; all users should
subclass torch.Tensor directly.

Composability

Non-compositional usages of as_strided + mutation under torch.compile will raise an error (#122502)

The torch.compile flow involves functionalizing any mutations inside the region being compiled. Torch.as_strided is
an existing view op that can be used non-compositionally: meaning when you call x.as_strided(...), as_strided will only
consider the underlying storage size of x, and ignore its current size/stride/storage_offset when creating a new view.
This makes it difficult to safely functionalize mutations on views of as_strided that are created non-compositionally,
so we ban them rather than risking silent correctness issues under torch.compile.

An example of a non-compositional usage of as_strided followed by mutation that we will error on is below. You can avoid
this issue by re-writing your usage of as_strided so that it is compositional (for example: either use a different set
of view ops instead of as_strided, or call as_strided directly on the base tensor instead of an existing view of it).

@torch.compile
def foo(a):
    e = a.diagonal()
    # as_strided is being called on an existing view (e),
    # making it non-compositional. mutations to f under torch.compile
    # are not allowed, as we cannot easily functionalize them safely
    f = e.as_strided((2,), (1,), 0)
    f.add_(1.0)
    return a

We now verify schemas of custom ops at registration time (#124520)

Previously, you could register a custom op through the operator registration APIs, but give it a schema that contained
types unknown to the PyTorch Dispatcher. This behavior came from TorchScript, where “unknown” types were implicitly
treated by the TorchScript interpreter as type variables. However, calling such a custom op through regular PyTorch
would result in an error later. As of 2.4, we will raise an error at registration time, when you first register the
custom operator. You can get the old behavior by constructing the schema with allow_typevars=true.

TORCH_LIBRARY(my_ns, m) {
  // this now raises an error at registration time: bar/baz are unknown types
  m.def("my_ns::foo(bar t) -> baz");
  // you can get back the old behavior with the below flag
  m.def(torch::schema("my_ns::foo(bar t) -> baz", /*allow_typevars*/ true));
}

Autograd frontend

Delete torch.autograd.function.traceable APIs (#122817)

The torch.autograd.function.traceable(...) API, which sets the is_traceable class attribute
on a torch.autograd.Function class, was deprecated in 2.3 and is now being deleted.
This API does not do anything and was only meant for internal purposes.
The following raised a warning in 2.3, and now errors because the API has been deleted:

@torch.autograd.function.traceable
class Func(torch.autograd.Function):
    ...

Release engineering

  • Remove caffe2 db and distributed from build system (#125092)

Optim

  • Remove SparseAdam's weird allowance of raw Tensor input (#127081).

Distributed

DeviceMesh

Update get_group and add get_all_groups (#128097)
In 2.3 and before, users can do:

mesh_2d = init_device_mesh("cuda", (2, 2), mesh_dim_names=("dp", "tp"))
mesh_2d.get_group()  # This will return all sub-pgs within the mesh
assert mesh_2d.get_group()[0] == mesh_2d.get_group(0)
assert mesh_2d.get_group()[1] == mesh_2d.get_group(1)

But from 2.4 forward, if users call get_group without passing in the dim, they will get a RuntimeError.
Instead, they should use get_all_groups:

mesh_2d = init_device_mesh("cuda", (2, 2), mesh_dim_names=("dp", "tp"))
mesh_2d.get_group()  # This will throw a RuntimeError
assert mesh_2d.get_all_groups()[0] == mesh_2d.get_group(0)
assert mesh_2d.get_all_groups()[1] == mesh_2d.get_group(1)

Pipelining

Retire torch.distributed.pipeline (#127354)
In 2.3 and before, users can do:

import torch.distributed.pipeline  # warning saying that this will be removed and users need to migrate to torch.distributed.pipelining

But from 2.4 forward, if users write the code above, they will get a ModuleNotFoundError.
Instead, they should use torch.distributed.pipelining:

import torch.distributed.pipeline  # -> ModuleNotFoundError
import torch.distributed.pipelining

jit

  • Fix serialization/deepcopy behavior for tensors that are aliasing but not equal (#126126)

Fx

Complete revamp of float/promotion sympy handling (#126905)

ONNX

  • Remove caffe2 contrib and experiments (#125038)

Deprecations

Python frontend

  • User warning when using torch.load with the default weights_only=False value (#129239, #129396, #129509).
    A warning is now raised if the weights_only value is not specified during a call to torch.load, encouraging users to
    adopt the safest practice when loading weights.
  • Deprecate device-specific autocast API (#126062)
    All the autocast APIs are unified under torch.amp and it can be used as a drop-in replacement for torch.{device}.amp APIs
    (passing a device argument where applicable).
  • Export torch.newaxis=None for Python Array API/Numpy consistency (#125026)

Composability

  • Deprecate calling FakeTensor.data_ptr in eager-mode. FakeTensors are tensors without a valid data pointer, so in
    general their data pointer is not safe to access. This makes it easier for torch.compile to provide a nice error
    message when tracing custom ops into a graph that are not written in a PT2-friendly way (bec...

PyTorch 2.3.1 Release, bug fix release

05 Jun 19:16
63d5e92

This release is meant to fix the following issues (regressions / silent correctness):

Torch.compile:

  • Remove runtime dependency on JAX/XLA when importing torch._dynamo (#124634)
  • Hide the "Plan failed with a cudnnException" warning (#125790)
  • Fix CUDA memory leak (#124238) (#120756)

Distributed:

  • Fix format_utils executable, which was causing it to run as a no-op (#123407)
  • Fix regression with device_mesh in 2.3.0 during initialization causing memory spikes (#124780)
  • Fix crash of FSDP + DTensor with ShardingStrategy.SHARD_GRAD_OP (#123617)
  • Fix failure with distributed checkpointing + FSDP if at least 1 forward/backward pass has not been run. (#121544) (#127069)
  • Fix error with distributed checkpointing + FSDP, and with use_orig_params = False and activation checkpointing (#124698) (#126935)
  • Fix set_model_state_dict errors on compiled module with non-persistent buffer with distributed checkpointing (#125336) (#125337)

MPS:

  • Fix data corruption when copying large (>4GiB) tensors (#124635)
  • Fix Tensor.abs() for complex (#125662)

Packaging:

Other:

  • Fix DeepSpeed transformer extension build on ROCm (#121030)
  • Fix kernel crash on tensor.dtype.to_complex() after ~100 calls in the IPython kernel (#125154)

Release tracker #125425 contains all relevant pull requests related to this release as well as links to related issues.


PyTorch 2.3: User-Defined Triton Kernels in torch.compile, Tensor Parallelism in Distributed

24 Apr 16:12
97ff6cf

PyTorch 2.3 Release notes

  • Highlights
  • Backwards Incompatible Changes
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation

Highlights

We are excited to announce the release of PyTorch® 2.3! PyTorch 2.3 offers support for user-defined Triton kernels in torch.compile, allowing users to migrate their own Triton kernels from eager without experiencing performance complications or graph breaks. As well, Tensor Parallelism improves the experience for training Large Language Models using native PyTorch functions, which has been validated on training runs for 100B parameter models.
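A hedged sketch of calling a user-defined Triton kernel from a torch.compile'd function (assumes a CUDA device and the triton package; the kernel and sizes are illustrative):

import torch
import triton
import triton.language as tl

@triton.jit
def add_one_kernel(x_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + 1.0, mask=mask)

@torch.compile
def add_one(x):
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_one_kernel[grid](x, out, n, BLOCK=1024)
    return out

print(add_one(torch.ones(4096, device="cuda")))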

This release is composed of 3393 commits and 426 contributors since PyTorch 2.2. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.3. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.

Beta:
  • User-defined Triton kernels in torch.compile
  • Tensor parallelism within PyTorch Distributed
  • Support for semi-structured sparsity

Prototype:
  • torch.export adds new API to specify dynamic_shapes
  • Asynchronous checkpoint generation

Performance Improvements:
  • Weight-Only-Quantization introduced into Inductor CPU backend

*To see a full list of public feature submissions click here.

Tracked Regressions

torch.compile on MacOS is considered unstable for 2.3 as there are known cases where it will hang (#124497)

torch.compile imports many unrelated packages when it is invoked (#123954)

This can cause significant first-time slowdown and instability when these packages are not fully compatible with PyTorch within a single process.

torch.compile is not supported on Python 3.12 (#120233)

PyTorch support for Python 3.12 in general is considered experimental. Please use a Python version between 3.8 and 3.11 instead. This is an existing issue since PyTorch 2.2.

Backwards Incompatible Changes

Change default torch_function behavior to be disabled when torch_dispatch is defined (#120632)

Defining a subclass with a __torch_dispatch__ entry will now automatically set __torch_function__ to be disabled. This aligns better with all the use cases we’ve observed for subclasses. The main change of behavior is that the result of the __torch_dispatch__ handler will no longer go through the default __torch_function__ handler, wrapping it into the current subclass. This in particular allows your subclass to return a plain Tensor or another subclass from any op.

The original behavior can be recovered by adding the following to your Tensor subclass:

@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
    return super().__torch_function__(func, types, args, kwargs)

ProcessGroupNCCL removes multi-device-per-thread support from the C++ level (#119099, #118674)

  • Python level support was removed in 2.2.
  • To simplify ProcessGroupNCCL’s code, we remove support for multiple cuda devices per thread. To our knowledge, this is not an active use case, but it adds a large burden to our codebase. If you are relying on this, there is no workaround other than rewriting your pytorch program to use one device per process or one device per thread (multi-threads per process is still supported).

Removes no_dist and coordinator_rank from public DCP APIs (#121317)

As part of an overall effort to simplify our public-facing APIs for Distributed Checkpointing, we've decided to deprecate usage of the coordinator_rank and no_dist parameters under torch.distributed.checkpoint. In our opinion, these parameters can lead to confusion around the intended effect during API usage, and have limited value to begin with. One concrete example is #118337, where there is ambiguity in which process group is referenced by the coordinator rank (additional context: #118337). In the case of the no_dist parameter, we consider this an implementation detail which should be hidden from the user. Starting in this release, no_dist is inferred from the initialized state of the process group, assuming the intention is to use collectives if a process group is initialized, and assuming the opposite in the case it is not.

Version 2.2:

import torch.distributed.checkpoint as dcp

dcp.save(
    state_dict={"model": model.state_dict()},
    checkpoint_id="path_to_model_checkpoint",
    no_dist=True,
    coordinator_rank=0,
)
# ...
dcp.load(
    state_dict={"model": model.state_dict()},
    checkpoint_id="path_to_model_checkpoint",
    no_dist=True,
    coordinator_rank=0,
)

Version 2.3:

# no_dist is assumed from pg state, and rank 0 is always coordinator.
import torch.distributed.checkpoint as dcp

dcp.save(
    state_dict={"model": model.state_dict()},
    checkpoint_id="path_to_model_checkpoint",
)
# ...
dcp.load(
    state_dict={"model": model.state_dict()},
    checkpoint_id="path_to_model_checkpoint",
)

Remove deprecated tp_mesh_dim arg (#121432)

Starting from PyTorch 2.3, the parallelize_module API only accepts a DeviceMesh (the tp_mesh_dim argument has been removed). If you have an N-D DeviceMesh for multi-dimensional parallelism, you can use mesh_nd["tp"] to obtain a 1-D DeviceMesh for tensor parallelism.

torch.export

  • Users must pass in an nn.Module to torch.export.export. The reason is that we have several invariants on the ExportedProgram that are ambiguous if the top-level object being traced is a function, such as how we guarantee that every call_function node has an nn_module_stack populated, and we offer ways to access the state_dict/parameters/buffers of the exported program. We'd like torch.export to offer strong invariants: the value proposition of export is that you can trade flexibility for stronger guarantees about your model. (#117528)
  • Removed constraints in favor of dynamic_shapes (#117573,#117917,#117916,#120981,#120979)
  • ExportedProgram is no longer a callable. Instead users will need to use .module() to call the ExportedProgram. This is to prevent users from treating ExportedPrograms as torch.nn.Modules as we do not plan to support all features that torch.nn.Modules have, like hooks. Instead users can create a proper torch.nn.Module through exported_program.module() and use that as a callable. (#120019,#118425,#119105)
  • Remove equality_constraints from ExportedProgram as it is not used or useful anymore. Dimensions with equal constraints will now have the same symbol. (#116979)
  • Remove torch._export.export in favor of torch.export.export (#119095)
  • Remove CallSpec (#117671)

Enable fold_quantize by default in PT2 Export Quantization (#118701,#118605,#119425,#117797)

Previously, the PT2 Export Quantization flow did not generate quantized weights by default, but instead used fp32 weights in the quantized model in this pattern: fp32 weight -> q -> dq -> linear. Setting fold_quantize=True is now the default after convert_pt2e, producing a graph with quantized weights in this pattern, and users will see a reduction in the model size: int8 weight -> dq -> linear.

Version 2.2:

folded_model = convert_pt2e(model, fold_quantize=True)
non_folded_model = convert_pt2e(model)

Version 2.3:

folded_model = convert_pt2e(model)
non_folded_model = convert_pt2e(model, fold_quantize=False)

Remove deprecated torch.jit.quantized APIs (#118406)

All functions and classes under torch.jit.quantized will now raise an error if called/instantiated. This API has long been deprecated in favor of torch.ao.nn.quantized.

Version 2.2:

# torch.jit.quantized APIs
torch.jit.quantized.quantize_rnn_cell_modules
torch.jit.quantized.quantize_rnn_modules
torch.jit.quantized.quantize_linear_modules
torch.jit.quantized.QuantizedLinear
torch.jit.QuantizedLinearFP16
torch.jit.quantized.QuantizedGRU
torch.jit.quantized.QuantizedGRUCell
torch.jit.quantized.QuantizedLSTM
torch.jit.quantized.QuantizedLSTMCell

Version 2.3:

# Corresponding torch.ao.quantization APIs
torch.ao.nn.quantized.dynamic.RNNCell
torch.ao.quantization.quantize_dynamic APIs
torch.ao.nn.quantized.dynamic.Linear
torch.ao.nn.quantized.dynamic.GRU
torch.ao.nn.quantized.dynamic.GRUCell
torch.ao.nn.quantized.dynamic.LSTM

...


PyTorch 2.2.2 Release, bug fix release

27 Mar 22:27
39901f2

This release is meant to fix the following issues (regressions / silent correctness):

  • Properly raise an error when trying to use inductor backend on non-supported platforms such as Windows (#115969)
  • Fix mkldnn performance issue on Windows platform (#121618)
  • Fix RuntimeError: cannot create std::vector larger than max_size() in torch.nn.functional.conv1d on non-contiguous CPU inputs by patching OneDNN (pytorch/builder#1742) (pytorch/builder#1744)
  • Add support for torch.distributed.fsdp.StateDictType.FULL_STATE_DICT when using torch.distributed.fsdp.FullyShardedDataParallel with the device_mesh argument (#120837)
  • Fix the make triton command on the release branch for users building the release branch from source (#121169)
  • Ensure gcc>=9.0 for build from source and cpp_extensions (#120126)
  • Fix cxx11-abi build in release branch (pytorch/builder#1709)
  • Fix building from source on Windows with MSVC 14.38 - VS 2022 (#122120)

Release tracker #120999 contains all relevant pull requests related to this release as well as links to related issues.
