torch.optim.adamw.adamw
- torch.optim.adamw.adamw(params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, foreach=None, capturable=False, differentiable=False, fused=None, grad_scale=None, found_inf=None, has_complex=False, *, amsgrad, beta1, beta2, lr, weight_decay, eps, maximize)[source]
Functional API that performs the AdamW algorithm computation. See AdamW for details.
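As a rough sketch of how this functional entry point might be called directly: the caller owns all optimizer state (the moment estimates and the step counters, each passed as a list of tensors) and the function updates the parameters and that state in place. The hyperparameter values below are illustrative, not defaults taken from this page.

```python
import torch
from torch.optim.adamw import adamw

# One parameter with a gradient, plus freshly initialized optimizer state.
param = torch.zeros(3)
grad = torch.ones(3)
exp_avg = torch.zeros(3)      # first-moment (mean) estimate
exp_avg_sq = torch.zeros(3)   # second-moment (uncentered variance) estimate
step = torch.tensor(0.0)      # step count kept as a tensor

adamw(
    [param], [grad], [exp_avg], [exp_avg_sq],
    [],       # max_exp_avg_sqs: unused when amsgrad=False
    [step],
    amsgrad=False,
    beta1=0.9,
    beta2=0.999,
    lr=1e-3,
    weight_decay=1e-2,
    eps=1e-8,
    maximize=False,
)

# The step counter was incremented and the parameter moved
# opposite the (all-positive) gradient.
print(step.item())       # 1.0
print(bool(torch.all(param < 0)))
```

In normal use this bookkeeping is handled by the `torch.optim.AdamW` optimizer class; the functional form is mainly useful when managing optimizer state manually.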