
torch.linalg.lu

torch.linalg.lu(A, *, pivot=True, out=None)

Computes the LU decomposition with partial pivoting of a matrix.

Letting \(\mathbb{K}\) be \(\mathbb{R}\) or \(\mathbb{C}\), the LU decomposition with partial pivoting of a matrix \(A \in \mathbb{K}^{m \times n}\) is defined as

\[A = PLU \qquad P \in \mathbb{K}^{m \times m},\quad L \in \mathbb{K}^{m \times k},\quad U \in \mathbb{K}^{k \times n}\]

where \(k = \min(m, n)\), \(P\) is a permutation matrix, \(L\) is lower triangular with ones on the diagonal, and \(U\) is upper triangular.
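As a sketch of the definition above (not part of the official docs), the following checks the structural properties of the three factors for a random square matrix: P is a permutation matrix, L is unit lower triangular, U is upper triangular, and their product reconstructs A.

```python
import torch

A = torch.randn(4, 4, dtype=torch.float64)
P, L, U = torch.linalg.lu(A)

# P is a permutation matrix: exactly one 1 in every row and column.
ones = torch.ones(4, dtype=torch.float64)
assert torch.allclose(P.sum(dim=0), ones)
assert torch.allclose(P.sum(dim=1), ones)

# L is lower triangular with ones on the diagonal.
assert torch.allclose(L, torch.tril(L))
assert torch.allclose(L.diagonal(), ones)

# U is upper triangular.
assert torch.allclose(U, torch.triu(U))

# The factors reconstruct A: A = P @ L @ U.
assert torch.allclose(P @ L @ U, A)
```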

If pivot = False and A is on GPU, then the LU decomposition without pivoting is computed

\[A = LU \qquad L \in \mathbb{K}^{m \times k},\quad U \in \mathbb{K}^{k \times n}\]

When pivot = False, the returned matrix P will be empty. The LU decomposition without pivoting may not exist if any of the principal minors of A is singular. In this case, the output matrix may contain inf or NaN.

Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.
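A small sketch of the batched behavior described above (the batch shape (2, 3) here is an arbitrary choice for illustration): the factors carry the batch dimensions of A, with k = min(m, n) determining the inner shapes.

```python
import torch

# A batch of 2 x 3 matrices, each of shape (5, 4), so k = min(5, 4) = 4.
A = torch.randn(2, 3, 5, 4, dtype=torch.float64)
P, L, U = torch.linalg.lu(A)

# Batch dims (2, 3) are preserved on every factor.
assert P.shape == (2, 3, 5, 5)
assert L.shape == (2, 3, 5, 4)
assert U.shape == (2, 3, 4, 4)

# The reconstruction holds batch-wise.
assert torch.allclose(P @ L @ U, A)
```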

See also

torch.linalg.solve() solves a system of linear equations using the LU decomposition with partial pivoting.
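For a solve, torch.linalg.solve() is the recommended entry point; purely as an illustration of how the factors returned by this function relate to it, one can solve A x = b by hand with two triangular solves via torch.linalg.solve_triangular (since A = P L U implies L (U x) = Pᵀ b):

```python
import torch

A = torch.randn(4, 4, dtype=torch.float64)
b = torch.randn(4, 1, dtype=torch.float64)

P, L, U = torch.linalg.lu(A)

# A x = b  <=>  P L U x = b  <=>  L (U x) = P^T b.
# Forward substitution with the unit lower-triangular L...
y = torch.linalg.solve_triangular(L, P.mT @ b, upper=False, unitriangular=True)
# ...then back substitution with the upper-triangular U.
x = torch.linalg.solve_triangular(U, y, upper=True)

assert torch.allclose(A @ x, b)
# Matches the dedicated solver.
assert torch.allclose(x, torch.linalg.solve(A, b))
```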

Warning

The LU decomposition is almost never unique, as there are often different permutation matrices that can yield different LU decompositions. As such, different platforms, like SciPy, or inputs on different devices, may produce different valid decompositions.

Warning

Gradient computations are only supported if the input matrix is full-rank. If this condition is not met, no error will be thrown, but the gradient may not be finite. This is because the LU decomposition with pivoting is not differentiable at these points.

Parameters
  • A (Tensor) – tensor of shape (*, m, n) where * is zero or more batch dimensions.

  • pivot (bool, optional) – Controls whether to compute the LU decomposition with partial pivoting or no pivoting. Default: True.

Keyword Arguments

out (tuple, optional) – output tuple of three tensors. Ignored if None. Default: None.

Returns

A named tuple (P, L, U).

Examples:

>>> A = torch.randn(3, 2)
>>> P, L, U = torch.linalg.lu(A)
>>> P
tensor([[0., 1., 0.],
        [0., 0., 1.],
        [1., 0., 0.]])
>>> L
tensor([[1.0000, 0.0000],
        [0.5007, 1.0000],
        [0.0633, 0.9755]])
>>> U
tensor([[0.3771, 0.0489],
        [0.0000, 0.9644]])
>>> torch.dist(A, P @ L @ U)
tensor(5.9605e-08)

>>> A = torch.randn(2, 5, 7, device="cuda")
>>> P, L, U = torch.linalg.lu(A, pivot=False)
>>> P
tensor([], device='cuda:0')
>>> torch.dist(A, L @ U)
tensor(1.0376e-06, device='cuda:0')