Commit af0aa36

Merge branch 'site' into site

2 parents (1c48f80 + f45f98d), commit af0aa36

File tree

2,518 files changed (+8565, -5755 lines)


docs/2.2/amp.html

Lines changed: 2 additions & 2 deletions

@@ -630,7 +630,7 @@ Automatic Mixed Precision package - torch.amp
 autocast(enabled=False) subregions can be nested in autocast-enabled regions.
 Locally disabling autocast can be useful, for example, if you want to force a subregion
 to run in a particular dtype. Disabling autocast gives you explicit control over
-the execution type.In the subregion, inputs from the surrounding region
+the execution type. In the subregion, inputs from the surrounding region
 should be cast to dtype before use:
 # Creates some tensors in default dtype (here assumed to be float32)
 a_float32 = torch.rand((8, 8), device="cuda")
@@ -1734,4 +1734,4 @@ Resources
 })
 </script>
 </body>
-</html>
+</html>

(The removed and added lines are textually identical; the change is likely whitespace-only, such as a trailing newline.)
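To make the doc passage above concrete, here is a minimal runnable sketch of the nesting-and-casting pattern it describes, following the example the amp docs build toward (assumes a CUDA device is available):

import torch

# Tensors created in the default dtype (float32) outside autocast.
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    # Runs in float16 under autocast.
    e_float16 = torch.mm(a_float32, b_float32)

    with torch.autocast(device_type="cuda", enabled=False):
        # Autocast is disabled here, so inputs produced in the surrounding
        # region are cast explicitly to the dtype this subregion should use.
        f_float32 = torch.mm(a_float32, e_float16.float())

    # Back in the autocast-enabled region: mixed inputs are handled again.
    g_float16 = torch.mm(a_float32, f_float32)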

docs/2.3/_images/RReLU.png

158 Bytes

docs/2.3/_modules/torch/distributed/device_mesh.html

Lines changed: 35 additions & 67 deletions
Large diffs are not rendered by default.

docs/2.3/_modules/torch/fx/node.html

Lines changed: 3 additions & 0 deletions

@@ -508,12 +508,15 @@ Source code for torch.fx.node
     torch.amp._exit_autocast,
 }

+# TODO: Either refactor this into 2 functions 1 dce for functional graphs and 1 dce for all graphs,
+# or add logic to correctly mark all inplace ops as side effectful.
 _side_effectful_functions: Set[Callable] = {
     torch._assert,
     torch._assert_async,
     _ops.aten._assert_async.msg,
     _ops.aten._assert_scalar.default,
     _ops.aten.copy_.default,
+    _ops.aten.index_put_.default,
     _ops.aten.sym_constrain_range.default,
     _ops.aten.sym_constrain_range_for_size.default,
     _ops.profiler._record_function_enter,
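The added _ops.aten.index_put_.default entry matters because FX dead-code elimination prunes any node without users unless its target is registered as side-effectful. A minimal sketch of that mechanism (assuming stock torch.fx behavior, and using torch._assert, which the set above already contains):

import torch
from torch.fx import symbolic_trace

class M(torch.nn.Module):
    def forward(self, x):
        y = x + 1                                    # pure and unused: DCE removes it
        torch._assert(x.sum() >= 0, "negative sum")  # side-effectful: DCE keeps it
        return x * 2

gm = symbolic_trace(M())
gm.graph.eliminate_dead_code()  # consults the side-effect registry via Node.is_impure()
gm.recompile()
print(gm.code)  # the torch._assert call survives; the unused add is gone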

docs/2.3/_modules/torch/nested.html

Lines changed: 2 additions & 5 deletions

@@ -639,11 +639,8 @@ Source code for torch.nested
         requires_grad=requires_grad,
         pin_memory=pin_memory)
     elif layout == torch.jagged:
-        # Need to:
-        # * Detach tensors to discard autograd history
-        # * Wrap lists of scalars as tensors
-        list_of_tensors = [t.detach() if isinstance(t, Tensor) else torch.as_tensor(t)
-                           for t in tensor_list]
+        # Need to wrap lists of scalars as tensors
+        list_of_tensors = [t if isinstance(t, Tensor) else torch.as_tensor(t) for t in tensor_list]

         from torch.nested._internal.nested_tensor import jagged_from_list
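For context, a hedged usage sketch (not part of the commit) of the constructor this code path serves; the mixed input shows why scalar lists must be wrapped, and the printed shapes are illustrative:

import torch

# Jagged-layout nested tensor from components that are ragged in their first
# dimension; the plain Python list is wrapped via torch.as_tensor by the
# comprehension shown in the diff above.
nt = torch.nested.nested_tensor(
    [torch.randn(2, 4), torch.randn(3, 4), [[1.0, 2.0, 3.0, 4.0]]],
    layout=torch.jagged,
)
print(nt.is_nested)                    # True
print([c.shape for c in nt.unbind()])  # [torch.Size([2, 4]), torch.Size([3, 4]), torch.Size([1, 4])]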

docs/2.3/_modules/torch/nn/modules/module.html

Lines changed: 7 additions & 2 deletions

@@ -482,6 +482,7 @@ Source code for torch.nn.modules.module
 from typing import Union, Tuple, Any, Callable, Iterator, Set, Optional, overload, TypeVar, Mapping, Dict, List
 from typing_extensions import Self
 from ...utils.hooks import RemovableHandle
+from torch.utils._python_dispatch import is_traceable_wrapper_subclass

 __all__ = ['register_module_forward_pre_hook', 'register_module_forward_hook',
            'register_module_full_backward_pre_hook', 'register_module_backward_hook',

@@ -1271,8 +1272,12 @@ Source code for torch.nn.modules.module
             with torch.no_grad():
                 param_applied = fn(param)
             p_should_use_set_data = compute_should_use_set_data(param, param_applied)
+
+            # subclasses may have multiple child tensors so we need to use swap_tensors
+            p_should_use_swap_tensors = should_use_swap_tensors or is_traceable_wrapper_subclass(param_applied)
+
             param_grad = param.grad
-            if should_use_swap_tensors:
+            if p_should_use_swap_tensors:
                 try:
                     if param_grad is not None:
                         # Accessing param.grad makes its at::Tensor's use_count 2, which will prevent swapping.

@@ -1298,7 +1303,7 @@ Source code for torch.nn.modules.module
                 with torch.no_grad():
                     grad_applied = fn(param_grad)
                 g_should_use_set_data = compute_should_use_set_data(param_grad, grad_applied)
-                if should_use_swap_tensors:
+                if p_should_use_swap_tensors:
                     grad_applied.requires_grad_(param_grad.requires_grad)
                     try:
                         torch.utils.swap_tensors(param_grad, grad_applied)
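The renamed flag routes traceable wrapper subclasses through torch.utils.swap_tensors rather than .data assignment. A minimal sketch of what that primitive does, independent of Module._apply (assumes PyTorch >= 2.2, where it was added):

import torch

# swap_tensors exchanges the payloads of the two tensor objects in place,
# so existing references to t1 now observe t2's data and vice versa.
t1 = torch.zeros(2)
t2 = torch.ones(2)
torch.utils.swap_tensors(t1, t2)
print(t1)  # tensor([1., 1.])
print(t2)  # tensor([0., 0.])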

docs/2.3/_modules/torch/nn/modules/rnn.html

Lines changed: 1 addition & 0 deletions

@@ -682,6 +682,7 @@ Source code for torch.nn.modules.rnn
                             self.batch_first, bool(self.bidirectional))

     def _apply(self, fn, recurse=True):
+        self._flat_weight_refs = []
         ret = super()._apply(fn, recurse)

         # Resets _flat_weights
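For context (a sketch, not from the commit): Module._apply is the hook behind every dtype or device conversion, so the added reset runs on any such move of an RNN module:

import torch

rnn = torch.nn.LSTM(input_size=4, hidden_size=8)
rnn = rnn.to(torch.float64)  # dtype move goes through RNNBase._apply
out, (h, c) = rnn(torch.randn(5, 1, 4, dtype=torch.float64))
print(out.shape)  # torch.Size([5, 1, 8])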

docs/2.3/_sources/generated/exportdb/index.rst.txt

Lines changed: 11 additions & 11 deletions

@@ -19,8 +19,8 @@ support in export please create an issue in the pytorch/pytorch repo wih a modul
    :caption: Tags

    torch.escape-hatch
-   torch.cond
    torch.dynamic-shape
+   torch.cond
    python.closure
    torch.dynamic-value
    python.data-structure

@@ -203,7 +203,7 @@ cond_branch_class_method

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

     Support Level: SUPPORTED

@@ -284,7 +284,7 @@ cond_branch_nested_function

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

     Support Level: SUPPORTED

@@ -363,7 +363,7 @@ cond_branch_nonlocal_variables

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

     Support Level: SUPPORTED

@@ -528,7 +528,7 @@ cond_operands

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

     Support Level: SUPPORTED

@@ -602,7 +602,7 @@ cond_predicate

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

     Support Level: SUPPORTED

@@ -666,7 +666,7 @@ constrain_as_size_example

 .. note::

-    Tags: :doc:`torch.dynamic-value <torch.dynamic-value>`, :doc:`torch.escape-hatch <torch.escape-hatch>`
+    Tags: :doc:`torch.escape-hatch <torch.escape-hatch>`, :doc:`torch.dynamic-value <torch.dynamic-value>`

     Support Level: SUPPORTED

@@ -726,7 +726,7 @@ constrain_as_value_example

 .. note::

-    Tags: :doc:`torch.dynamic-value <torch.dynamic-value>`, :doc:`torch.escape-hatch <torch.escape-hatch>`
+    Tags: :doc:`torch.escape-hatch <torch.escape-hatch>`, :doc:`torch.dynamic-value <torch.dynamic-value>`

     Support Level: SUPPORTED

@@ -1240,7 +1240,7 @@ list_contains

 .. note::

-    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.data-structure <python.data-structure>`, :doc:`python.assert <python.assert>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`

     Support Level: SUPPORTED

@@ -1286,7 +1286,7 @@ list_unpack

 .. note::

-    Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
+    Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

     Support Level: SUPPORTED

@@ -2005,6 +2005,6 @@ Result:

 .. code-block::

-    Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7f268479fd30>
+    Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7f4d9cf5cd30>

docs/2.3/_sources/generated/exportdb/python.assert.rst.txt

Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@ list_contains

 .. note::

-    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.data-structure <python.data-structure>`, :doc:`python.assert <python.assert>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`

     Support Level: SUPPORTED

docs/2.3/_sources/generated/exportdb/python.control-flow.rst.txt

Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@ list_unpack

 .. note::

-    Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
+    Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

     Support Level: SUPPORTED

0 commit comments
