
L1Loss#

class torch.nn.modules.loss.L1Loss(size_average=None, reduce=None, reduction='mean')[source]#

Creates a criterion that measures the mean absolute error (MAE) between each element in the input $x$ and target $y$.

The unreduced (i.e. with reduction set to 'none') loss can be described as:

$$\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,$$

where $N$ is the batch size. If reduction is not 'none' (default 'mean'), then:

$$\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}$$

$x$ and $y$ are tensors of arbitrary shapes with a total of $N$ elements each.

The sum operation still operates over all the elements, and divides by $N$.

The division by $N$ can be avoided if one sets reduction='sum'.

Supports real-valued and complex-valued inputs.
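For complex inputs, the elementwise term $|x_n - y_n|$ is the complex modulus, so the loss is still a real scalar. A minimal sketch verifying this against a manual computation (the tensor values here are arbitrary):

```python
import torch
import torch.nn as nn

loss = nn.L1Loss()  # default reduction='mean'

# Complex-valued input and target; |x - y| is the complex modulus.
x = torch.tensor([1 + 2j, 3 - 1j, -2 + 0j])
y = torch.tensor([0 + 0j, 3 + 0j, -2 + 2j])

out = loss(x, y)
manual = (x - y).abs().mean()  # mean of complex moduli (a real scalar)
print(torch.allclose(out, manual))
```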

Parameters
  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

  • reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
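The three reduction modes can be compared directly. A small sketch (tensor shapes chosen arbitrarily) showing that 'mean' divides the summed elementwise losses by the total element count $N$:

```python
import torch
import torch.nn as nn

input = torch.randn(3, 5)
target = torch.randn(3, 5)

elementwise = nn.L1Loss(reduction='none')(input, target)  # shape (3, 5)
summed = nn.L1Loss(reduction='sum')(input, target)        # scalar: sum of |x_n - y_n|
averaged = nn.L1Loss(reduction='mean')(input, target)     # scalar: sum / N, with N = 15

print(torch.allclose(summed, elementwise.sum()))
print(torch.allclose(averaged, summed / input.numel()))
```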

Shape:
  • Input: $(*)$, where $*$ means any number of dimensions.

  • Target: $(*)$, same shape as the input.

  • Output: scalar. If reduction is 'none', then $(*)$, same shape as the input.
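As a quick shape check (the input shape here is arbitrary): reduction='none' preserves the input shape, while the reducing modes return a 0-dimensional scalar tensor.

```python
import torch
import torch.nn as nn

input = torch.randn(2, 3, 4)
target = torch.randn(2, 3, 4)

out_none = nn.L1Loss(reduction='none')(input, target)
out_mean = nn.L1Loss(reduction='mean')(input, target)

print(out_none.shape)  # same shape as the input: torch.Size([2, 3, 4])
print(out_mean.shape)  # scalar: torch.Size([])
```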

Examples

>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
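Since $l_n = |x_n - y_n|$, the backward pass with reduction='mean' yields the gradient $\operatorname{sign}(x_n - y_n) / N$ at each element (away from ties, which occur with probability zero for random inputs). A sketch checking this against autograd:

```python
import torch
import torch.nn as nn

loss = nn.L1Loss()  # reduction='mean'
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)

loss(input, target).backward()

# d/dx |x - y| = sign(x - y) almost everywhere; 'mean' divides by N = 15.
expected = torch.sign(input.detach() - target) / input.numel()
print(torch.allclose(input.grad, expected))
```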
forward(input, target)[source]#

Runs the forward pass.

Return type

Tensor