torch.optim.Optimizer.zero_grad

Optimizer.zero_grad(set_to_none=True)[source]

Reset the gradients of all optimized torch.Tensors.

Parameters:

set_to_none (bool, optional) –

Instead of setting to zero, set the grads to None. Default: True

This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:

  1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently (the sketch after this list shows both cases).

  2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.

  3. torch.optim optimizers behave differently depending on whether the gradient is 0 or None (in one case it performs the step with a gradient of 0, and in the other it skips the step altogether).
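
The differences in points 1 and 2 are easy to observe directly. Below is a minimal sketch, assuming a small torch.nn.Linear model and an SGD optimizer chosen purely for illustration, that runs a backward pass and then compares the two reset modes:

```python
import torch

# Toy model and optimizer, used only to illustrate the two reset modes.
model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Populate .grad attributes with a backward pass.
model(torch.randn(8, 4)).sum().backward()

# set_to_none=False: gradients are kept as tensors filled with zeros.
opt.zero_grad(set_to_none=False)
print(model.weight.grad)  # tensor of zeros with the same shape as the weight

# Another backward pass, then the default behavior (set_to_none=True):
model(torch.randn(8, 4)).sum().backward()
opt.zero_grad()
print(model.weight.grad)  # None
```

Because the default drops the gradient tensors entirely, code that reads p.grad after zero_grad() should be prepared to handle None.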