TorchDynamo APIs for fine-grained tracing#

Created On: Jul 28, 2023 | Last Updated On: Jun 16, 2025

Note

In this document, torch.compiler.compile and torch.compile are used interchangeably. Both versions will work in your code.

torch.compile performs TorchDynamo tracing on the whole user model. However, it is possible that a small part of the model code cannot be handled by torch.compile. In this case, you might want to disable the compiler on that particular portion, while running compilation on the rest of the model. This section describes the existing APIs that you can use to skip compilation on parts of your code, and the relevant use cases.

The APIs that you can use to define portions of the code on which to disable compilation are listed in the following table:

TorchDynamo APIs to control fine-grained tracing#

| API | Description | When to use? |
| --- | --- | --- |
| torch.compiler.disable | Disables Dynamo on the decorated function as well as recursively invoked functions. | Excellent for unblocking a user if a small portion of the model cannot be handled with torch.compile. |
| torch._dynamo.disallow_in_graph | Disallows the marked op in the TorchDynamo graph. TorchDynamo causes a graph break and runs the op in eager (no-compile) mode. This is suitable for ops, while torch.compiler.disable is suitable for decorating functions. | Excellent for both debugging and unblocking if a custom op like torch.ops.fbgemm.* is causing issues with torch.compile. |
| torch.compiler.allow_in_graph | The annotated callable goes as is into the TorchDynamo graph, that is, as a black box for TorchDynamo. Note that AOT Autograd will trace through it, so allow_in_graph is only a Dynamo-level concept. | Useful for portions of the model that have known hard-to-support TorchDynamo features, like hooks or autograd.Function. However, each usage of allow_in_graph must be carefully screened (no graph breaks, no closures). |
| torch._dynamo.graph_break | Adds a graph break. The code before and after the graph break goes through TorchDynamo. | Rarely useful for deployment. If you think you need this, you most probably need either disable or disallow_in_graph. |
| torch.compiler.is_compiling | Indicates whether a graph is executed/traced as part of torch.compile() or torch.export(). | |
| torch.compiler.is_dynamo_compiling | Indicates whether a graph is traced via TorchDynamo. Stricter than torch.compiler.is_compiling(), as it is only True when TorchDynamo is used. | |
| torch.compiler.is_exporting | Indicates whether a graph is traced via export. Stricter than torch.compiler.is_compiling(), as it is only True when torch.export is used. | |

torch.compiler.disable#

torch.compiler.disable disables compilation on the decorated function frame and all the function frames recursively invoked from the decorated function frame.

TorchDynamo intercepts the execution of each Python function frame. So, suppose you have a code structure (image below) where the function fn calls functions a_fn and b_fn, and a_fn calls aa_fn and ab_fn. When you use PyTorch eager mode rather than torch.compile, these function frames run as is. With torch.compile, TorchDynamo intercepts each of these function frames (indicated by the green color):

[Figure: call stack diagram of the different APIs.]

Let's imagine that function a_fn is causing trouble with torch.compile, and that this is a non-critical portion of the model. You can use compiler.disable on function a_fn. As shown above, TorchDynamo will stop looking at frames originating from the a_fn call (white color indicates the original Python behavior).

To skip compilation, you can decorate the offending function with @torch.compiler.disable.

You can also use the non-decorator syntax if you don't want to change the source code. However, we recommend that you avoid this style if possible. Here, you have to take care that all users of the original function are now using the patched version.

torch._dynamo.disallow_in_graph#

torch._dynamo.disallow_in_graph disallows an operator, but not the function, from being present in the TorchDynamo-extracted graph. Note that this is suitable for operators, and not for general functions as in the case of torch._dynamo.disable.

Let's imagine you compile your model with PyTorch. TorchDynamo is able to extract a graph, but then you see the downstream compiler failing. For example, the meta kernel is missing, or some Autograd dispatch key is set incorrectly for a particular operator. Then you can mark that operator as disallow_in_graph, and TorchDynamo will cause a graph break and run that operator using PyTorch eager mode.

The catch is that you will have to find the corresponding Dynamo-level operator, and not the ATen-level operator. See more in the Limitations section of the doc.
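A sketch of the workflow, using torch.sub as a stand-in for the operator that trips up your downstream compiler (the eager backend is an assumption, used only to keep the example lightweight). Because the flag is global, the example also undoes it with allow_in_graph afterwards:

```python
import torch
import torch._dynamo

# Disallow the Dynamo-level op (torch.sub), not the ATen-level op
# (torch.ops.aten.sub) -- the latter would have no effect.
torch._dynamo.disallow_in_graph(torch.sub)

@torch.compile(backend="eager")
def fn(x):
    a = torch.sin(x)
    b = torch.sub(a, 1)  # graph break here; torch.sub runs eagerly
    return torch.cos(b)

x = torch.randn(4)
out = fn(x)

# disallow_in_graph is a global flag; reset it when done.
torch._dynamo.allow_in_graph(torch.sub)
```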

Warning

torch._dynamo.disallow_in_graph is a global flag. If you are comparing different backend compilers, you might have to call allow_in_graph for the disallowed operator when switching to the other compiler.

torch.compiler.allow_in_graph#

torch.compiler.allow_in_graph is useful when the relevant function frame has some known hard-to-support TorchDynamo feature, such as hooks and autograd.Function, and you are confident that downstream PyTorch components such as AOTAutograd can safely trace through the decorated function. When a function is decorated with allow_in_graph, TorchDynamo treats it as a black box and puts it as is in the generated graph.

Warning

allow_in_graph skips TorchDynamo completely on the decorated function, omitting all TorchDynamo safety checks, including graph breaks, handling of closures, and others. Use allow_in_graph with caution. PyTorch downstream components, such as AOTAutograd, rely on TorchDynamo to handle complex Python features, but allow_in_graph bypasses TorchDynamo. Using allow_in_graph could lead to soundness issues and hard-to-debug problems.

Limitations#

All the existing APIs are applied at the TorchDynamo level. Therefore, these APIs have visibility into only what TorchDynamo sees. This can lead to confusing scenarios.

For example, torch._dynamo.disallow_in_graph will not work for ATen operators, because those are visible to AOT Autograd rather than to TorchDynamo. So, torch._dynamo.disallow_in_graph(torch.ops.aten.add) will have no effect; you must target the Dynamo-level operator instead.