
torch.functional.lu

torch.functional.lu(*args, **kwargs)[source]

Computes the LU factorization of a matrix or batches of matrices A. Returns a tuple containing the LU factorization and pivots of A. Pivoting is done if pivot is set to True.

Warning

torch.lu() is deprecated in favor of torch.linalg.lu_factor() and torch.linalg.lu_factor_ex(). torch.lu() will be removed in a future PyTorch release. LU, pivots = torch.lu(A, compute_pivots) should be replaced with

LU, pivots = torch.linalg.lu_factor(A, compute_pivots)

LU, pivots, info = torch.lu(A, compute_pivots, get_infos=True) should be replaced with

LU, pivots, info = torch.linalg.lu_factor_ex(A, compute_pivots)
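A minimal sketch of the migration, using the default pivoting behavior rather than an explicit compute_pivots argument:

```python
import torch

A = torch.randn(2, 3, 3)

# Deprecated: LU, pivots = torch.lu(A)
# Replacement:
LU, pivots = torch.linalg.lu_factor(A)

# Deprecated: LU, pivots, info = torch.lu(A, get_infos=True)
# Replacement; info holds per-matrix error codes (0 means success):
LU2, pivots2, info = torch.linalg.lu_factor_ex(A)

print(LU.shape, pivots.shape, info.shape)
```

The new functions return the same packed LU matrix and 1-indexed pivot vector described below, so downstream code such as torch.lu_unpack() keeps working unchanged.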

Note

  • The returned permutation matrix for every matrix in the batch is represented by a 1-indexed vector of size min(A.shape[-2], A.shape[-1]). pivots[i] == j represents that in the i-th step of the algorithm, the i-th row was permuted with the j-1-th row.

  • LU factorization with pivot = False is not available for CPU, and attempting to do so will throw an error. However, LU factorization with pivot = False is available for CUDA.

  • This function does not check if the factorization was successful or not if get_infos is True, since the status of the factorization is present in the third element of the return tuple.

  • In the case of batches of square matrices with size less than or equal to 32 on a CUDA device, the LU factorization is repeated for singular matrices due to a bug in the MAGMA library (see magma issue 13).

  • L, U, and P can be derived using torch.lu_unpack().
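As a quick sketch of recovering the explicit factors with torch.lu_unpack() (shown here on the torch.linalg.lu_factor() output, which uses the same LU/pivots format):

```python
import torch

torch.manual_seed(0)
A = torch.randn(3, 3)
LU, pivots = torch.linalg.lu_factor(A)

# Unpack the packed LU matrix and 1-indexed pivot vector
# into an explicit permutation matrix and triangular factors.
P, L, U = torch.lu_unpack(LU, pivots)

# The factors reassemble the original matrix: A = P @ L @ U.
print(torch.allclose(P @ L @ U, A, atol=1e-6))
```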

Warning

The gradients of this function will only be finite when A is full rank. This is because the LU decomposition is only differentiable at full-rank matrices. Furthermore, if A is close to not being full rank, the gradient will be numerically unstable as it depends on the computation of L^{-1} and U^{-1}.

Parameters
  • A (Tensor) – the tensor to factor of size (*, m, n)

  • pivot (bool, optional) – whether to compute the LU decomposition with partial pivoting, or the regular LU decomposition. pivot = False is not supported on CPU. Default: True

  • get_infos (bool, optional) – if set to True, returns an info IntTensor. Default: False

  • out (tuple, optional) – optional output tuple. If get_infos is True, then the elements in the tuple are Tensor, IntTensor, and IntTensor. If get_infos is False, then the elements in the tuple are Tensor, IntTensor. Default: None

Returns

A tuple of tensors containing

  • factorization (Tensor): the factorization of size (*, m, n)

  • pivots (IntTensor): the pivots of size (*, min(m, n)). pivots stores all the intermediate transpositions of rows. The final permutation perm could be reconstructed by applying swap(perm[i], perm[pivots[i] - 1]) for i = 0, ..., pivots.size(-1) - 1, where perm is initially the identity permutation of m elements (essentially this is what torch.lu_unpack() is doing).

  • infos (IntTensor, optional): if get_infos is True, this is a tensor of size (*) where non-zero values indicate whether factorization for the matrix or each minibatch has succeeded or failed
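The swap recipe for pivots described above can be sketched directly; applying the reconstructed permutation to the rows of A recovers L @ U (the variable names here are illustrative):

```python
import torch

torch.manual_seed(0)
m = 4
A = torch.randn(m, m)
LU, pivots = torch.linalg.lu_factor(A)

# Rebuild the row permutation from the 1-indexed pivot vector,
# starting from the identity permutation of m elements.
perm = list(range(m))
for i in range(pivots.size(-1)):
    j = int(pivots[i]) - 1          # convert 1-indexed pivot to a 0-indexed row
    perm[i], perm[j] = perm[j], perm[i]

# Cross-check against torch.lu_unpack(): permuting the rows of A by
# the reconstructed perm yields the product of the triangular factors.
P, L, U = torch.lu_unpack(LU, pivots)
print(torch.allclose(A[perm], L @ U, atol=1e-6))
```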

Return type

(Tensor, IntTensor, IntTensor (optional))

Example:

>>> A = torch.randn(2, 3, 3)
>>> A_LU, pivots = torch.lu(A)
>>> A_LU
tensor([[[ 1.3506,  2.5558, -0.0816],
         [ 0.1684,  1.1551,  0.1940],
         [ 0.1193,  0.6189, -0.5497]],

        [[ 0.4526,  1.2526, -0.3285],
         [-0.7988,  0.7175, -0.9701],
         [ 0.2634, -0.9255, -0.3459]]])
>>> pivots
tensor([[ 3,  3,  3],
        [ 3,  3,  3]], dtype=torch.int32)
>>> A_LU, pivots, info = torch.lu(A, get_infos=True)
>>> if info.nonzero().size(0) == 0:
...     print('LU factorization succeeded for all samples!')
LU factorization succeeded for all samples!