no_grad

class torch.no_grad(orig_func=None)

Context-manager that disables gradient calculation.

Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. There is an exception! All factory functions, or functions that create a new Tensor and take a requires_grad kwarg, will NOT be affected by this mode.

This context manager is thread local; it will not affect computation in other threads.
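The thread-local behavior can be illustrated with a minimal pure-Python sketch. This is not PyTorch's actual implementation; it only mimics how a thread-local "grad enabled" flag behaves, so disabling it in one thread leaves other threads unaffected:

```python
import threading

# Stand-in for PyTorch's per-thread grad-mode state (illustrative only).
_state = threading.local()

def grad_enabled():
    # Gradients are enabled by default in every thread.
    return getattr(_state, "enabled", True)

class no_grad_sketch:
    """Toy context manager that flips the flag for the current thread only."""
    def __enter__(self):
        self.prev = grad_enabled()
        _state.enabled = False

    def __exit__(self, *exc):
        _state.enabled = self.prev

results = {}

def worker():
    # Runs while the main thread is inside the no_grad_sketch block,
    # but sees its own thread's default state.
    results["other_thread"] = grad_enabled()

with no_grad_sketch():
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    results["this_thread"] = grad_enabled()

# results == {"other_thread": True, "this_thread": False}
```

The same holds for torch.no_grad: entering it on one thread does not disable gradient computation on worker threads you spawn.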

Also functions as a decorator.
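The dual context-manager/decorator behavior can be sketched in plain Python. The names below are illustrative, not PyTorch's internals; the point is that applying an instance to a function wraps each call in a fresh context:

```python
import functools

# Stand-in for a global grad-mode flag (illustrative only).
FLAG = {"enabled": True}

class off_mode:
    """Works both as `with off_mode():` and as `@off_mode()`."""

    def __enter__(self):
        self.prev = FLAG["enabled"]
        FLAG["enabled"] = False
        return self

    def __exit__(self, *exc):
        FLAG["enabled"] = self.prev
        return False

    def __call__(self, func):
        # Decorator path: wrap the call in a fresh context so the flag
        # is disabled for the duration of the call and restored after.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with self.__class__():
                return func(*args, **kwargs)
        return wrapper

@off_mode()
def check():
    return FLAG["enabled"]

inside = check()           # False: disabled during the wrapped call
outside = FLAG["enabled"]  # True: restored after the call returns
```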

Note

No-grad is one of several mechanisms that can enable or disable gradients locally; see Locally disabling gradient computation for more information on how they compare.

Note

This API does not apply to forward-mode AD. If you want to disable forward AD for a computation, you can unpack your dual tensors.
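Unpacking a dual tensor and reusing only its primal part drops the tangent, which detaches subsequent computation from forward-mode AD. A small sketch using torch.autograd.forward_ad (the exact tensors here are illustrative):

```python
import torch
import torch.autograd.forward_ad as fwAD

with fwAD.dual_level():
    primal = torch.tensor([1.0, 2.0])
    tangent = torch.tensor([1.0, 1.0])
    dual = fwAD.make_dual(primal, tangent)

    # Forward AD propagates through ordinary ops: d(3x) = 3 * tangent.
    y = dual * 3
    y_tangent = fwAD.unpack_dual(y).tangent  # tensor([3., 3.])

    # Keeping only the primal drops the tangent, so downstream results
    # carry no forward-mode gradient information.
    z = fwAD.unpack_dual(y).primal * 2
    z_tangent = fwAD.unpack_dual(z).tangent  # None
```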

Example::

    >>> x = torch.tensor([1.], requires_grad=True)
    >>> with torch.no_grad():
    ...     y = x * 2
    >>> y.requires_grad
    False
    >>> @torch.no_grad()
    ... def doubler(x):
    ...     return x * 2
    >>> z = doubler(x)
    >>> z.requires_grad
    False
    >>> @torch.no_grad()
    ... def tripler(x):
    ...     return x * 3
    >>> z = tripler(x)
    >>> z.requires_grad
    False
    >>> # factory function exception
    >>> with torch.no_grad():
    ...     a = torch.nn.Parameter(torch.rand(10))
    >>> a.requires_grad
    True