torch.qr

torch.qr(input: Tensor, some: bool = True, *, out: Union[Tensor, Tuple[Tensor, ...], List[Tensor], None])

Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input = QR, with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices.

If some is True, then this function returns the thin (reduced) QR factorization. Otherwise, if some is False, this function returns the complete QR factorization.

Warning

torch.qr() is deprecated in favor of torch.linalg.qr() and will be removed in a future PyTorch release. The boolean parameter some has been replaced with a string parameter mode.

Q, R = torch.qr(A) should be replaced with

Q, R = torch.linalg.qr(A)

Q, R = torch.qr(A, some=False) should be replaced with

Q, R = torch.linalg.qr(A, mode="complete")
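As a sketch of the migration, the two replacement calls above can be checked for the shapes they return and for reconstructing the input (using an arbitrary 4 × 3 example matrix):

```python
import torch

A = torch.randn(4, 3)

# Reduced (thin) QR via the replacement API; mode="reduced" is the default.
Q, R = torch.linalg.qr(A)
assert Q.shape == (4, 3) and R.shape == (3, 3)

# Complete QR: Q is square (m, m), R is (m, n).
Qc, Rc = torch.linalg.qr(A, mode="complete")
assert Qc.shape == (4, 4) and Rc.shape == (4, 3)

# Both factorizations reconstruct A.
assert torch.allclose(Q @ R, A, atol=1e-6)
assert torch.allclose(Qc @ Rc, A, atol=1e-6)
```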

Warning

If you plan to backpropagate through QR, note that the current backward implementation is only well-defined when the first min(input.size(-1), input.size(-2)) columns of input are linearly independent. This behavior will probably change once QR supports pivoting.
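A minimal sketch of backpropagating through QR, assuming a full-rank input so that the condition above holds (a random Gaussian matrix is full rank with probability 1); the scalar loss here is arbitrary and purely illustrative:

```python
import torch

# A random (5, 3) double-precision matrix; its first min(3, 5) = 3 columns
# are linearly independent with probability 1, so backward is well-defined.
A = torch.randn(5, 3, dtype=torch.double, requires_grad=True)

Q, R = torch.linalg.qr(A)

# Arbitrary scalar loss built from the factorization.
loss = R.diagonal().abs().sum()
loss.backward()

assert A.grad is not None and A.grad.shape == A.shape
```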

Note

This function uses LAPACK for CPU inputs and MAGMA for CUDA inputs, and may produce different (valid) decompositions on different device types or different platforms.

Parameters
  • input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m × n.

  • some (bool, optional) –

    Set to True for reduced QR decomposition and False for complete QR decomposition. If k = min(m, n) then:

    • some=True: returns (Q, R) with dimensions (m, k), (k, n) (default)

    • some=False: returns (Q, R) with dimensions (m, m), (m, n)

Keyword Arguments

out (tuple, optional) – tuple of Q and R tensors. The dimensions of Q and R are detailed in the description of some above.
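As a short sketch of the out keyword: preallocated tensors (here empty ones) are resized in place to hold Q and R. Note that calling torch.qr() still emits the deprecation warning discussed above.

```python
import torch

a = torch.randn(4, 4)

# Preallocated output tensors; torch.qr resizes them as needed.
Q = torch.empty(0)
R = torch.empty(0)

# some=True with a square 4x4 input gives Q: (4, 4), R: (4, 4).
torch.qr(a, some=True, out=(Q, R))

assert Q.shape == (4, 4) and R.shape == (4, 4)
assert torch.allclose(Q @ R, a, atol=1e-5)
```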

Example:

>>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.qr(a)
>>> q
tensor([[-0.8571,  0.3943,  0.3314],
        [-0.4286, -0.9029, -0.0343],
        [ 0.2857, -0.1714,  0.9429]])
>>> r
tensor([[ -14.0000,  -21.0000,   14.0000],
        [   0.0000, -175.0000,   70.0000],
        [   0.0000,    0.0000,  -35.0000]])
>>> torch.mm(q, r).round()
tensor([[  12.,  -51.,    4.],
        [   6.,  167.,  -68.],
        [  -4.,   24.,  -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1.,  0.,  0.],
        [ 0.,  1., -0.],
        [ 0., -0.,  1.]])
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.qr(a, some=False)
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.mT, q), torch.eye(4))
True