torch.linalg.pinv

torch.linalg.pinv(A, *, atol=None, rtol=None, hermitian=False, out=None) → Tensor

Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.

The pseudoinverse may be defined algebraically, but it is more computationally convenient to understand it through the SVD.
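The SVD construction mentioned above can be sketched as follows. This is a minimal illustration using NumPy rather than PyTorch, assuming the standard truncated-SVD formula A⁺ = V · diag(1/σ) · Uᴴ with small singular values zeroed out; it is not the actual internal implementation.

```python
# Minimal sketch of the pseudoinverse via the SVD (NumPy for illustration):
# A+ = V @ diag(1/sigma) @ U^H, with singular values below a cutoff set to zero.
import numpy as np

def pinv_via_svd(A, rtol=None):
    U, S, Vh = np.linalg.svd(A, full_matrices=False)
    if rtol is None:
        # Default relative tolerance: max(m, n) * machine epsilon of the dtype.
        rtol = max(A.shape) * np.finfo(A.dtype).eps
    cutoff = rtol * S.max()
    # Invert only the singular values above the threshold; discard the rest.
    S_inv = np.where(S > cutoff, 1.0 / S, 0.0)
    return Vh.conj().T @ np.diag(S_inv) @ U.conj().T

A = np.random.randn(3, 5)
assert np.allclose(pinv_via_svd(A), np.linalg.pinv(A))
```

For a full-rank matrix no singular value falls below the cutoff, so this reduces to inverting every singular value.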

Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.

If hermitian=True, A is assumed to be Hermitian if complex or symmetric if real, but this is not checked internally. Instead, just the lower triangular part of the matrix is used in the computations.
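For the Hermitian case, the pseudoinverse can be built from an eigendecomposition instead of an SVD. The following is a hedged sketch of that idea in NumPy (the eigenvalue cutoff here is an illustrative choice mirroring the tolerance rule below, not the exact internal rule):

```python
# Sketch of the hermitian=True path: build the pseudoinverse from eigh.
# For a symmetric/Hermitian A, A+ = V @ diag(1/lambda for nonzero lambda) @ V^H.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = A + A.T                      # make A real symmetric

w, V = np.linalg.eigh(A)         # eigendecomposition (uses the lower triangle)
cutoff = max(A.shape) * np.finfo(A.dtype).eps * np.abs(w).max()
# Invert eigenvalues whose magnitude exceeds the cutoff; zero out the rest.
w_inv = np.where(np.abs(w) > cutoff, 1.0 / w, 0.0)
Apinv = V @ np.diag(w_inv) @ V.T

assert np.allclose(Apinv, np.linalg.pinv(A))
```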

The singular values (or the norm of the eigenvalues when hermitian=True) that are below the max(atol, σ₁ · rtol) threshold are treated as zero and discarded in the computation, where σ₁ is the largest singular value (or eigenvalue).

If rtol is not specified and A is a matrix of dimensions (m, n), the relative tolerance is set to rtol = max(m, n) · ε, where ε is the epsilon value for the dtype of A (see finfo). If rtol is not specified and atol is specified to be larger than zero, then rtol is set to zero.
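The default tolerance rule above can be computed by hand. A small sketch, assuming a float32 input of shape (4, 6) for illustration:

```python
# Reproduce the default rtol described above: max(m, n) * eps for the dtype.
import numpy as np

m, n = 4, 6                               # illustrative matrix dimensions
eps = float(np.finfo(np.float32).eps)     # machine epsilon for float32
rtol_default = max(m, n) * eps            # default when rtol is not specified
# If atol alone is given (and > 0), rtol instead defaults to 0.
```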

If atol or rtol is a torch.Tensor, its shape must be broadcastable to that of the singular values of A as returned by torch.linalg.svd().

Note

This function uses torch.linalg.svd() if hermitian=False and torch.linalg.eigh() if hermitian=True. For CUDA inputs, this function synchronizes that device with the CPU.

Note

Consider using torch.linalg.lstsq() if possible for multiplying a matrix on the left by the pseudoinverse, as:

torch.linalg.lstsq(A, B).solution == A.pinv() @ B

It is always preferred to use lstsq() when possible, as it is faster and more numerically stable than computing the pseudoinverse explicitly.
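The equivalence in the note above can be checked numerically. A hedged illustration using the NumPy analogues (np.linalg.lstsq and np.linalg.pinv) on an overdetermined full-rank system:

```python
# Multiplying by the pseudoinverse on the left agrees with a least-squares
# solve: lstsq(A, B) == pinv(A) @ B (NumPy analogues for illustration).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # overdetermined: more rows than columns
B = rng.standard_normal((5, 2))

x_lstsq = np.linalg.lstsq(A, B, rcond=None)[0]
x_pinv = np.linalg.pinv(A) @ B
assert np.allclose(x_lstsq, x_pinv)
```

The least-squares route avoids forming the pseudoinverse explicitly, which is the source of its speed and stability advantage.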

Note

This function has a NumPy compatible variant linalg.pinv(A, rcond, hermitian=False). However, use of the positional argument rcond is deprecated in favor of rtol.

Warning

This function uses torch.linalg.svd() internally (or torch.linalg.eigh() when hermitian=True), so its derivative has the same problems as those of these functions. See the warnings in torch.linalg.svd() and torch.linalg.eigh() for more details.

See also

torch.linalg.inv() computes the inverse of a square matrix.

torch.linalg.lstsq() computes A.pinv() @ B with a numerically stable algorithm.

Parameters
  • A (Tensor) – tensor of shape (*, m, n) where * is zero or more batch dimensions.

  • rcond (float, Tensor, optional) – [NumPy Compat]. Alias for rtol. Default: None.

Keyword Arguments
  • atol (float, Tensor, optional) – the absolute tolerance value. When None it’s considered to be zero. Default: None.

  • rtol (float, Tensor, optional) – the relative tolerance value. See above for the value it takes when None. Default: None.

  • hermitian (bool, optional) – indicates whether A is Hermitian if complex or symmetric if real. Default: False.

  • out (Tensor, optional) – output tensor. Ignored if None. Default: None.

Examples:

>>> A = torch.randn(3, 5)
>>> A
tensor([[ 0.5495,  0.0979, -1.4092, -0.1128,  0.4132],
        [-1.1143, -0.3662,  0.3042,  1.6374, -0.9294],
        [-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])
>>> torch.linalg.pinv(A)
tensor([[ 0.0600, -0.1933, -0.2090],
        [-0.0903, -0.0817, -0.4752],
        [-0.7124, -0.1631, -0.2272],
        [ 0.1356,  0.3933, -0.5023],
        [-0.0308, -0.1725, -0.5216]])

>>> A = torch.randn(2, 6, 3)
>>> Apinv = torch.linalg.pinv(A)
>>> torch.dist(Apinv @ A, torch.eye(3))
tensor(8.5633e-07)

>>> A = torch.randn(3, 3, dtype=torch.complex64)
>>> A = A + A.T.conj()  # creates a Hermitian matrix
>>> Apinv = torch.linalg.pinv(A, hermitian=True)
>>> torch.dist(Apinv @ A, torch.eye(3))
tensor(1.0830e-06)