
Linear Algebra

In addition to (and as part of) its support for multi-dimensional arrays, Julia provides native implementations of many common and useful linear algebra operations which can be loaded with using LinearAlgebra. Basic operations, such as tr, det, and inv, are all supported:

julia> A = [1 2 3; 4 1 6; 7 8 1]
3×3 Matrix{Int64}:
 1  2  3
 4  1  6
 7  8  1

julia> tr(A)
3

julia> det(A)
104.0

julia> inv(A)
3×3 Matrix{Float64}:
 -0.451923   0.211538    0.0865385
  0.365385  -0.192308    0.0576923
  0.240385   0.0576923  -0.0673077

As well as other useful operations, such as finding eigenvalues or eigenvectors:

julia> A = [-4. -17.; 2. 2.]
2×2 Matrix{Float64}:
 -4.0  -17.0
  2.0    2.0

julia> eigvals(A)
2-element Vector{ComplexF64}:
 -1.0 - 5.0im
 -1.0 + 5.0im

julia> eigvecs(A)
2×2 Matrix{ComplexF64}:
  0.945905-0.0im        0.945905+0.0im
 -0.166924+0.278207im  -0.166924-0.278207im

In addition, Julia provides many factorizations which can be used to speed up problems such as linear solves or matrix exponentiation by pre-factorizing a matrix into a form more amenable (for performance or memory reasons) to the problem. See the documentation on factorize for more information. As an example:

julia> A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
3×3 Matrix{Float64}:
   1.5   2.0  -4.0
   3.0  -1.0  -6.0
 -10.0   2.3   4.0

julia> factorize(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
3×3 Matrix{Float64}:
  1.0    0.0       0.0
 -0.15   1.0       0.0
 -0.3   -0.132196  1.0
U factor:
3×3 Matrix{Float64}:
 -10.0  2.3     4.0
   0.0  2.345  -3.4
   0.0  0.0    -5.24947

Since A is not Hermitian, symmetric, triangular, tridiagonal, or bidiagonal, an LU factorization may be the best we can do. Compare with:

julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Matrix{Float64}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> factorize(B)
BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}
D factor:
3×3 Tridiagonal{Float64, Vector{Float64}}:
 -1.64286   0.0   ⋅
  0.0      -2.8  0.0
   ⋅        0.0  5.0
U factor:
3×3 UnitUpperTriangular{Float64, Matrix{Float64}}:
 1.0  0.142857  -0.8
  ⋅   1.0       -0.6
  ⋅    ⋅         1.0
permutation:
3-element Vector{Int64}:
 1
 2
 3

Here, Julia was able to detect that B is in fact symmetric, and used a more appropriate factorization. Often it's possible to write more efficient code for a matrix that is known to have certain properties, e.g. that it is symmetric or tridiagonal. Julia provides some special types so that you can "tag" matrices as having these properties. For instance:

julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Matrix{Float64}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> sB = Symmetric(B)
3×3 Symmetric{Float64, Matrix{Float64}}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

sB has been tagged as a matrix that's (real) symmetric, so for later operations we might perform on it, such as eigenfactorization or computing matrix-vector products, efficiencies can be found by only referencing half of it. For example:

julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Matrix{Float64}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> sB = Symmetric(B)
3×3 Symmetric{Float64, Matrix{Float64}}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> x = [1; 2; 3]
3-element Vector{Int64}:
 1
 2
 3

julia> sB\x
3-element Vector{Float64}:
 -1.7391304347826084
 -1.1086956521739126
 -1.4565217391304346

The \ operation here performs the linear solve. The left-division operator is quite powerful, and it's easy to write compact, readable code that is flexible enough to solve all sorts of systems of linear equations.
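
The same operator also handles non-square systems; as documented for Base.:\ below, a rectangular A yields the minimum-norm least squares solution. A small sketch (the matrix and data here are made up for illustration):

using LinearAlgebra
A = [1.0 1.0; 1.0 2.0; 1.0 3.0]   # 3×2: overdetermined system
b = [1.0, 2.0, 2.0]
x = A \ b                         # least-squares solution via pivoted QR
A' * (A*x - b)                    # residual is orthogonal to the columns of A (≈ zero)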

Special matrices

Matrices with special symmetries and structures arise often in linear algebra and are frequently associated with various matrix factorizations. Julia features a rich collection of special matrix types, which allow for fast computation with specialized routines developed for particular matrix types.

The following tables summarize the types of special matrices that have been implemented in Julia, as well as whether hooks to various optimized methods for them in LAPACK are available.

| Type | Description |
| --- | --- |
| Symmetric | Symmetric matrix |
| Hermitian | Hermitian matrix |
| UpperTriangular | Upper triangular matrix |
| UnitUpperTriangular | Upper triangular matrix with unit diagonal |
| LowerTriangular | Lower triangular matrix |
| UnitLowerTriangular | Lower triangular matrix with unit diagonal |
| UpperHessenberg | Upper Hessenberg matrix |
| Tridiagonal | Tridiagonal matrix |
| SymTridiagonal | Symmetric tridiagonal matrix |
| Bidiagonal | Upper/lower bidiagonal matrix |
| Diagonal | Diagonal matrix |
| UniformScaling | Uniform scaling operator |

Elementary operations

| Matrix type | + | - | * | \ | Other functions with optimized methods |
| --- | --- | --- | --- | --- | --- |
| Symmetric | | | MV | | inv, sqrt, cbrt, exp |
| Hermitian | | | MV | | inv, sqrt, cbrt, exp |
| UpperTriangular | | | MV | MV | inv, det, logdet |
| UnitUpperTriangular | | | MV | MV | inv, det, logdet |
| LowerTriangular | | | MV | MV | inv, det, logdet |
| UnitLowerTriangular | | | MV | MV | inv, det, logdet |
| UpperHessenberg | | | | MM | inv, det |
| SymTridiagonal | M | M | MS | MV | eigmax, eigmin |
| Tridiagonal | M | M | MS | MV | |
| Bidiagonal | M | M | MS | MV | |
| Diagonal | M | M | MV | MV | inv, det, logdet, / |
| UniformScaling | M | M | MVS | MVS | / |

Legend:

| Key | Description |
| --- | --- |
| M (matrix) | An optimized method for matrix-matrix operations is available |
| V (vector) | An optimized method for matrix-vector operations is available |
| S (scalar) | An optimized method for matrix-scalar operations is available |

Matrix factorizations

| Matrix type | LAPACK | eigen | eigvals | eigvecs | svd | svdvals |
| --- | --- | --- | --- | --- | --- | --- |
| Symmetric | SY | | ARI | | | |
| Hermitian | HE | | ARI | | | |
| UpperTriangular | TR | A | A | A | | |
| UnitUpperTriangular | TR | A | A | A | | |
| LowerTriangular | TR | A | A | A | | |
| UnitLowerTriangular | TR | A | A | A | | |
| SymTridiagonal | ST | A | ARI | AV | | |
| Tridiagonal | GT | | | | | |
| Bidiagonal | BD | | | | A | A |
| Diagonal | DI | | A | | | |

Legend:

| Key | Description | Example |
| --- | --- | --- |
| A (all) | An optimized method to find all the characteristic values and/or vectors is available | e.g. eigvals(M) |
| R (range) | An optimized method to find the ilth through the ihth characteristic values is available | eigvals(M, il, ih) |
| I (interval) | An optimized method to find the characteristic values in the interval [vl, vh] is available | eigvals(M, vl, vh) |
| V (vectors) | An optimized method to find the characteristic vectors corresponding to the characteristic values x=[x1, x2,...] is available | eigvecs(M, x) |

The uniform scaling operator

A UniformScaling operator represents a scalar times the identity operator, λ*I. The identity operator I is defined as a constant and is an instance of UniformScaling. The size of these operators is generic and matches the other matrix in the binary operations +, -, * and \. For A+I and A-I this means that A must be square. Multiplication with the identity operator I is a no-op (except for checking that the scaling factor is one) and therefore almost without overhead.

To see the UniformScaling operator in action:

julia> U = UniformScaling(2);

julia> a = [1 2; 3 4]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> a + U
2×2 Matrix{Int64}:
 3  2
 3  6

julia> a * U
2×2 Matrix{Int64}:
 2  4
 6  8

julia> [a U]
2×4 Matrix{Int64}:
 1  2  2  0
 3  4  0  2

julia> b = [1 2 3; 4 5 6]
2×3 Matrix{Int64}:
 1  2  3
 4  5  6

julia> b - U
ERROR: DimensionMismatch: matrix is not square: dimensions are (2, 3)
Stacktrace:
[...]

If you need to solve many systems of the form (A+μI)x = b for the same A and different μ, it might be beneficial to first compute the Hessenberg factorization F of A via the hessenberg function. Given F, Julia employs an efficient algorithm for (F+μ*I) \ b (equivalent to (A+μ*I) \ b) and related operations like determinants.
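
A minimal sketch of this pattern (the matrix, shifts, and right-hand side are made up for illustration):

using LinearAlgebra
A = rand(5, 5); b = rand(5)
F = hessenberg(A)            # one O(n³) factorization up front
for μ in (0.1, 1.0, 10.0)
    x = (F + μ*I) \ b        # each shifted solve reuses F and is cheap
    @assert (A + μ*I) * x ≈ b
end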

Matrix factorizations

Matrix factorizations (a.k.a. matrix decompositions) compute the factorization of a matrix into a product of matrices, and are one of the central concepts in (numerical) linear algebra.

The following table summarizes the types of matrix factorizations that have been implemented in Julia. Details of their associated methods can be found in the Standard functions section of the Linear Algebra documentation.

| Type | Description |
| --- | --- |
| BunchKaufman | Bunch-Kaufman factorization |
| Cholesky | Cholesky factorization |
| CholeskyPivoted | Pivoted Cholesky factorization |
| LDLt | LDL(T) factorization |
| LU | LU factorization |
| QR | QR factorization |
| QRCompactWY | Compact WY form of the QR factorization |
| QRPivoted | Pivoted QR factorization |
| LQ | QR factorization of transpose(A) |
| Hessenberg | Hessenberg decomposition |
| Eigen | Spectral decomposition |
| GeneralizedEigen | Generalized spectral decomposition |
| SVD | Singular value decomposition |
| GeneralizedSVD | Generalized SVD |
| Schur | Schur decomposition |
| GeneralizedSchur | Generalized Schur decomposition |

Adjoints and transposes of Factorization objects are lazily wrapped in AdjointFactorization and TransposeFactorization objects, respectively. Generically, transposes of real Factorizations are wrapped as AdjointFactorization.

Orthogonal matrices (AbstractQ)

Some matrix factorizations generate orthogonal/unitary "matrix" factors. These factorizations include QR-related factorizations obtained from calls to qr, i.e., QR, QRCompactWY and QRPivoted; the Hessenberg factorization obtained from calls to hessenberg; and the LQ factorization obtained from lq. While these orthogonal/unitary factors admit a matrix representation, their internal representation is, for performance and memory reasons, different. Hence, they should rather be viewed as matrix-backed, function-based linear operators. In particular, reading, for instance, a column of their matrix representation requires running "matrix"-vector multiplication code, rather than simply reading out data from memory (possibly filling parts of the vector with structural zeros). Another clear distinction from other, non-triangular matrix types is that the underlying multiplication code allows for in-place modification during multiplication. Furthermore, objects of specific AbstractQ subtypes, such as those created via qr, hessenberg and lq, can behave like a square or a rectangular matrix depending on context:

julia> using LinearAlgebra

julia> Q = qr(rand(3,2)).Q
3×3 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}

julia> Matrix(Q)
3×2 Matrix{Float64}:
 -0.320597   0.865734
 -0.765834  -0.475694
 -0.557419   0.155628

julia> Q*I
3×3 Matrix{Float64}:
 -0.320597   0.865734  -0.384346
 -0.765834  -0.475694  -0.432683
 -0.557419   0.155628   0.815514

julia> Q*ones(2)
3-element Vector{Float64}:
  0.5451367118802273
 -1.241527373086654
 -0.40179067589600226

julia> Q*ones(3)
3-element Vector{Float64}:
  0.16079054743832022
 -1.674209978965636
  0.41372375588835797

julia> ones(1,2) * Q'
1×3 Matrix{Float64}:
 0.545137  -1.24153  -0.401791

julia> ones(1,3) * Q'
1×3 Matrix{Float64}:
 0.160791  -1.67421  0.413724

Due to this distinction from dense or structured matrices, the abstract AbstractQ type does not subtype AbstractMatrix, but instead has its own type hierarchy. Custom types that subtype AbstractQ can rely on generic fallbacks if the following interface is satisfied. For example, for

struct MyQ{T} <: LinearAlgebra.AbstractQ{T}
    # required fields
end

provide overloads for

Base.size(Q::MyQ) # size of corresponding square matrix representation
Base.convert(::Type{AbstractQ{T}}, Q::MyQ) # eltype promotion [optional]
LinearAlgebra.lmul!(Q::MyQ, x::AbstractVecOrMat) # left-multiplication
LinearAlgebra.rmul!(A::AbstractMatrix, Q::MyQ) # right-multiplication

If eltype promotion is not of interest, the convert method is unnecessary, since by default convert(::Type{AbstractQ{T}}, Q::AbstractQ{T}) returns Q itself. Adjoints of AbstractQ-typed objects are lazily wrapped in an AdjointQ wrapper type, which requires its own LinearAlgebra.lmul! and LinearAlgebra.rmul! methods. Given this set of methods, any Q::MyQ can be used like a matrix, preferably in a multiplicative context: multiplication via * with scalars, vectors and matrices from left and right, obtaining a matrix representation of Q via Matrix(Q) (or Q*I), and indexing into the matrix representation all work. In contrast, addition and subtraction, as well as more generally broadcasting over elements in the matrix representation, fail because that would be highly inefficient. For such use cases, consider computing the matrix representation up front and caching it for future reuse.
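
As a concrete illustration, here is a minimal and deliberately naive sketch of this interface that wraps an explicit orthogonal matrix; the name MyQ and its field are illustrative only, and the multiplication methods simply fall back to dense products rather than a packed representation:

using LinearAlgebra

struct MyQ{T,M<:AbstractMatrix{T}} <: LinearAlgebra.AbstractQ{T}
    data::M   # assumed to hold a square orthogonal/unitary matrix
end

Base.size(Q::MyQ) = size(Q.data)
Base.convert(::Type{LinearAlgebra.AbstractQ{T}}, Q::MyQ) where {T} =
    MyQ(convert(AbstractMatrix{T}, Q.data))
LinearAlgebra.lmul!(Q::MyQ, x::AbstractVecOrMat) = copyto!(x, Q.data * x)
LinearAlgebra.rmul!(A::AbstractMatrix, Q::MyQ) = copyto!(A, A * Q.data)
# adjoints arrive wrapped in LinearAlgebra.AdjointQ and need their own methods
LinearAlgebra.lmul!(aQ::LinearAlgebra.AdjointQ{<:Any,<:MyQ}, x::AbstractVecOrMat) =
    copyto!(x, aQ.Q.data' * x)
LinearAlgebra.rmul!(A::AbstractMatrix, aQ::LinearAlgebra.AdjointQ{<:Any,<:MyQ}) =
    copyto!(A, A * aQ.Q.data')

With these definitions, Q = MyQ(Matrix(qr(rand(3,3)).Q)) supports Q * x, x' * Q, Matrix(Q), and so on via the generic fallbacks.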

Pivoting Strategies

Several of Julia's matrix factorizations support pivoting, which can be used to improve their numerical stability. In fact, some matrix factorizations, such as the LU factorization, may fail without pivoting.

In pivoting, first, a pivot element with good numerical properties is chosen based on a pivoting strategy. Next, the rows and columns of the original matrix are permuted to bring the chosen element into place for subsequent computation. Finally, the process is repeated for each stage of the factorization.

Consequently, besides the conventional matrix factors, the outputs of pivoted factorization schemes also include permutation matrices.

In the following, the pivoting strategies implemented in Julia are briefly described. Note that not all matrix factorizations may support them. Consult the documentation of the respective matrix factorization for details on the supported pivoting strategies.

See also LinearAlgebra.ZeroPivotException.

LinearAlgebra.NoPivot (Type)
NoPivot

Pivoting is not performed. Matrix factorizations such as the LU factorization may fail without pivoting, and may also be numerically unstable for floating-point matrices in the face of roundoff error. This pivot strategy is mainly useful for pedagogical purposes.
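
For example (a hedged sketch; the matrix is made up, and the exact exception type may vary across Julia versions):

using LinearAlgebra
A = [0.0 1.0; 1.0 0.0]
lu(A)             # succeeds: partial pivoting swaps the rows
lu(A, NoPivot())  # throws, since the (1,1) pivot is exactly zero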

LinearAlgebra.RowNonZero (Type)
RowNonZero

The first non-zero element in the remaining rows is chosen as the pivot element.

Beware that for floating-point matrices, the resulting LU algorithm is numerically unstable; this strategy is mainly useful for comparison to hand calculations (which typically use this strategy) or for other algebraic types (e.g. rational numbers) not susceptible to roundoff errors. Otherwise, the default RowMaximum pivoting strategy should generally be preferred in Gaussian elimination.

Note that the element type of the matrix must admit an iszero method.
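
A small sketch with exact rational arithmetic (illustrative values):

using LinearAlgebra
A = Rational{Int}[0 1; 2 3]
F = lu(A, RowNonZero())   # exact arithmetic, so no roundoff concerns
F.L * F.U == A[F.p, :]    # exact equality holds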

LinearAlgebra.RowMaximum (Type)
RowMaximum

The maximum-magnitude element in the remaining rows is chosen as the pivot element. This is the default strategy for LU factorization of floating-point matrices, and is sometimes referred to as the "partial pivoting" algorithm.

Note that the element type of the matrix must admit an abs method, whose result type must admit a < method.

LinearAlgebra.ColumnNorm (Type)
ColumnNorm

The column with the maximum norm is used for subsequent computation. This is used for pivoted QR factorization.

Note that the element type of the matrix must admit norm and abs methods, whose respective result types must admit a < method.
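
For example, a pivoted QR factorization permutes the columns of A (illustrative values):

using LinearAlgebra
A = rand(4, 3)
F = qr(A, ColumnNorm())   # returns a QRPivoted factorization
F.Q * F.R ≈ A[:, F.p]     # columns of A are permuted by F.p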

Standard functions

Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. Sparse matrix factorizations call functions from SuiteSparse. Other sparse solvers are available as Julia packages.

Base.:* (Method)
*(A::AbstractMatrix, B::AbstractMatrix)

Matrix multiplication.

Examples

julia> [1 1; 0 1] * [1 0; 1 1]
2×2 Matrix{Int64}:
 2  1
 1  1

Base.:* (Method)

*(A, B::AbstractMatrix, C)
A * B * C * D

Chained multiplication of 3 or 4 matrices is done in the most efficient sequence, based on the sizes of the arrays. That is, the number of scalar multiplications needed for (A * B) * C (with 3 dense matrices) is compared to that for A * (B * C) to choose which of these to execute.

If the last factor is a vector, or the first a transposed vector, then it is efficient to deal with these first. In particular, x' * B * y means (x' * B) * y for an ordinary column-major B::Matrix. Unlike dot(x, B, y), this allocates an intermediate array.

If the first or last factor is a number, this will be fused with the matrix multiplication, using 5-arg mul!.

See also muladd, dot.

Julia 1.7

These optimisations require at least Julia 1.7.
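
To see why the association matters, consider outer dimensions that are much larger than the inner ones (sizes here are made up for illustration):

A = rand(100, 2); B = rand(2, 100); C = rand(100, 2)
# (A*B)*C forms a 100×100 intermediate (~40,000 scalar multiplications),
# while A*(B*C) forms a 2×2 intermediate (~800); ternary * picks the cheaper one.
A * B * C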

Base.:\ (Method)
\(A, B)

Matrix division using a polyalgorithm. For input matrices A and B, the result X is such that A*X == B when A is square. The solver that is used depends upon the structure of A. If A is upper or lower triangular (or diagonal), no factorization of A is required and the system is solved with either forward or backward substitution. For non-triangular square matrices, an LU factorization is used.

For rectangular A the result is the minimum-norm least squares solution computed by a pivoted QR factorization of A and a rank estimate of A based on the R factor.

When A is sparse, a similar polyalgorithm is used. For indefinite matrices, the LDLt factorization does not use pivoting during the numerical factorization and therefore the procedure can fail even for invertible matrices.

See also: factorize, pinv.

Examples

julia> A = [1 0; 1 -2]; B = [32; -4];

julia> X = A \ B
2-element Vector{Float64}:
 32.0
 18.0

julia> A * X == B
true

Base.:/ (Method)
A / B

Matrix right-division: A / B is equivalent to (B' \ A')' where \ is the left-division operator. For square matrices, the result X is such that A == X*B.

See also: rdiv!.

Examples

julia> A = Float64[1 4 5; 3 9 2]; B = Float64[1 4 2; 3 4 2; 8 7 1];

julia> X = A / B
2×3 Matrix{Float64}:
 -0.65   3.75  -1.2
  3.25  -2.75   1.0

julia> isapprox(A, X*B)
true

julia> isapprox(X, A*pinv(B))
true

LinearAlgebra.SingularException (Type)
SingularException

Exception thrown when the input matrix has one or more zero-valued eigenvalues and is not invertible. A linear solve involving such a matrix cannot be computed. The info field indicates the location of (one of) the singular value(s).

LinearAlgebra.PosDefException (Type)
PosDefException

Exception thrown when the input matrix was not positive definite. Some linear algebra functions and factorizations are only applicable to positive definite matrices. The info field indicates the location of (one of) the eigenvalue(s) which is (are) less than/equal to 0.

LinearAlgebra.ZeroPivotException (Type)
ZeroPivotException <: Exception

Exception thrown when a matrix factorization/solve encounters a zero in a pivot (diagonal) position and cannot proceed. This may not mean that the matrix is singular: it may be fruitful to switch to a different factorization such as pivoted LU that can re-order variables to eliminate spurious zero pivots. The info field indicates the location of (one of) the zero pivot(s).

LinearAlgebra.RankDeficientException (Type)
RankDeficientException

Exception thrown when the input matrix is rank deficient. Some linear algebra functions, such as the Cholesky decomposition, are only applicable to matrices that are not rank deficient. The info field indicates the computed rank of the matrix.

LinearAlgebra.LAPACKException (Type)
LAPACKException

Generic LAPACK exception thrown either during direct calls to the LAPACK functions or during calls to other functions that use the LAPACK functions internally but lack specialized error handling. The info field contains additional information on the underlying error and depends on the LAPACK function that was invoked.

LinearAlgebra.dot (Function)
dot(x, y)
x ⋅ y

Compute the dot product between two vectors. For complex vectors, the first vector is conjugated.

dot also works on arbitrary iterable objects, including arrays of any dimension, as long as dot is defined on the elements.

dot is semantically equivalent to sum(dot(vx,vy) for (vx,vy) in zip(x, y)), with the added restriction that the arguments must have equal lengths.

x ⋅ y (where ⋅ can be typed by tab-completing \cdot in the REPL) is a synonym for dot(x, y).

Examples

julia> dot([1; 1], [2; 3])
5

julia> dot([im; im], [1; 1])
0 - 2im

julia> dot(1:5, 2:6)
70

julia> x = fill(2., (5,5));

julia> y = fill(3., (5,5));

julia> dot(x, y)
150.0

LinearAlgebra.dot (Method)
dot(x, A, y)

Compute the generalized dot product dot(x, A*y) between two vectors x and y, without storing the intermediate result of A*y. As for the two-argument dot(_,_), this acts recursively. Moreover, for complex vectors, the first vector is conjugated.

Julia 1.4

Three-argument dot requires at least Julia 1.4.

Examples

julia> dot([1; 1], [1 2; 3 4], [2; 3])
26

julia> dot(1:5, reshape(1:25, 5, 5), 2:6)
4850

julia> ⋅(1:5, reshape(1:25, 5, 5), 2:6) == dot(1:5, reshape(1:25, 5, 5), 2:6)
true

LinearAlgebra.cross (Function)
cross(x, y)
×(x,y)

Compute the cross product of two 3-vectors.

Examples

julia> a = [0;1;0]
3-element Vector{Int64}:
 0
 1
 0

julia> b = [0;0;1]
3-element Vector{Int64}:
 0
 0
 1

julia> cross(a,b)
3-element Vector{Int64}:
 1
 0
 0

LinearAlgebra.axpy! (Function)
axpy!(α, x::AbstractArray, y::AbstractArray)

Overwrite y with x * α + y and return y. If x and y have the same axes, it's equivalent to y .+= x .* α.

Examples

julia> x = [1; 2; 3];

julia> y = [4; 5; 6];

julia> axpy!(2, x, y)
3-element Vector{Int64}:
  6
  9
 12

LinearAlgebra.axpby! (Function)
axpby!(α, x::AbstractArray, β, y::AbstractArray)

Overwrite y with x * α + y * β and return y. If x and y have the same axes, it's equivalent to y .= x .* α .+ y .* β.

Examples

julia> x = [1; 2; 3];

julia> y = [4; 5; 6];

julia> axpby!(2, x, 2, y)
3-element Vector{Int64}:
 10
 14
 18

LinearAlgebra.rotate! (Function)
rotate!(x, y, c, s)

Overwrite x with c*x + s*y and y with -conj(s)*x + c*y. Returns x and y.

Julia 1.5

rotate! requires at least Julia 1.5.

LinearAlgebra.reflect! (Function)
reflect!(x, y, c, s)

Overwrite x with c*x + s*y and y with conj(s)*x - c*y. Returns x and y.

Julia 1.5

reflect! requires at least Julia 1.5.
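
A quick sketch of both operations on coordinate vectors (values chosen for illustration):

using LinearAlgebra
c, s = cos(π/4), sin(π/4)
x = [1.0, 0.0]; y = [0.0, 1.0]
rotate!(x, y, c, s)    # now x ≈ [c, s] and y ≈ [-s, c]
x = [1.0, 0.0]; y = [0.0, 1.0]
reflect!(x, y, c, s)   # now x ≈ [c, s] and y ≈ [s, -c]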

LinearAlgebra.factorize (Function)
factorize(A)

Compute a convenient factorization of A, based upon the type of the input matrix. If A is passed as a generic matrix, factorize checks A to see if it is symmetric/triangular/etc. factorize checks every element of A to verify/rule out each property. It will short-circuit as soon as it can rule out symmetry/triangular structure. The return value can be reused for efficient solving of multiple systems. For example: A=factorize(A); x=A\b; y=A\C.

| Properties of A | Type of factorization |
| --- | --- |
| Positive-definite | Cholesky (see cholesky) |
| Dense symmetric/Hermitian | Bunch-Kaufman (see bunchkaufman) |
| Sparse symmetric/Hermitian | LDLt (see ldlt) |
| Triangular | Triangular |
| Diagonal | Diagonal |
| Bidiagonal | Bidiagonal |
| Tridiagonal | LU (see lu) |
| Symmetric real tridiagonal | LDLt (see ldlt) |
| General square | LU (see lu) |
| General non-square | QR (see qr) |

If factorize is called on a Hermitian positive-definite matrix, for instance, then factorize will return a Cholesky factorization.

Examples

julia> A = Array(Bidiagonal(fill(1.0, (5, 5)), :U))
5×5 Matrix{Float64}:
 1.0  1.0  0.0  0.0  0.0
 0.0  1.0  1.0  0.0  0.0
 0.0  0.0  1.0  1.0  0.0
 0.0  0.0  0.0  1.0  1.0
 0.0  0.0  0.0  0.0  1.0

julia> factorize(A) # factorize will check to see that A is already factorized
5×5 Bidiagonal{Float64, Vector{Float64}}:
 1.0  1.0   ⋅    ⋅    ⋅
  ⋅   1.0  1.0   ⋅    ⋅
  ⋅    ⋅   1.0  1.0   ⋅
  ⋅    ⋅    ⋅   1.0  1.0
  ⋅    ⋅    ⋅    ⋅   1.0

This returns a 5×5 Bidiagonal{Float64}, which can now be passed to other linear algebra functions (e.g. eigensolvers) which will use specialized methods for Bidiagonal types.

LinearAlgebra.Diagonal (Type)
Diagonal(V::AbstractVector)

Construct a lazy matrix with V as its diagonal.

See also UniformScaling for the lazy identity matrix I, diagm to make a dense matrix, and diag to extract diagonal elements.

Examples

julia> d = Diagonal([1, 10, 100])
3×3 Diagonal{Int64, Vector{Int64}}:
 1   ⋅    ⋅
 ⋅  10    ⋅
 ⋅   ⋅  100

julia> diagm([7, 13])
2×2 Matrix{Int64}:
 7   0
 0  13

julia> ans + I
2×2 Matrix{Int64}:
 8   0
 0  14

julia> I(2)
2×2 Diagonal{Bool, Vector{Bool}}:
 1  ⋅
 ⋅  1
Note

A one-column matrix is not treated like a vector; instead it calls the method Diagonal(A::AbstractMatrix), which extracts the 1-element diag(A):

julia> A = transpose([7.0 13.0])
2×1 transpose(::Matrix{Float64}) with eltype Float64:
  7.0
 13.0

julia> Diagonal(A)
1×1 Diagonal{Float64, Vector{Float64}}:
 7.0

Diagonal(A::AbstractMatrix)

Construct a matrix from the principal diagonal of A. The input matrix A may be rectangular, but the output will be square.

Examples

julia> A = [1 2; 3 4]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> D = Diagonal(A)
2×2 Diagonal{Int64, Vector{Int64}}:
 1  ⋅
 ⋅  4

julia> A = [1 2 3; 4 5 6]
2×3 Matrix{Int64}:
 1  2  3
 4  5  6

julia> Diagonal(A)
2×2 Diagonal{Int64, Vector{Int64}}:
 1  ⋅
 ⋅  5

Diagonal{T}(undef, n)

Construct an uninitialized Diagonal{T} of length n. See undef.

LinearAlgebra.Bidiagonal (Type)
Bidiagonal(dv::V, ev::V, uplo::Symbol) where V <: AbstractVector

Constructs an upper (uplo=:U) or lower (uplo=:L) bidiagonal matrix using the given diagonal (dv) and off-diagonal (ev) vectors. The result is of type Bidiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with convert(Array, _) (or Array(_) for short). The length of ev must be one less than the length of dv.

Examples

julia> dv = [1, 2, 3, 4]
4-element Vector{Int64}:
 1
 2
 3
 4

julia> ev = [7, 8, 9]
3-element Vector{Int64}:
 7
 8
 9

julia> Bu = Bidiagonal(dv, ev, :U) # ev is on the first superdiagonal
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  7  ⋅  ⋅
 ⋅  2  8  ⋅
 ⋅  ⋅  3  9
 ⋅  ⋅  ⋅  4

julia> Bl = Bidiagonal(dv, ev, :L) # ev is on the first subdiagonal
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  ⋅  ⋅  ⋅
 7  2  ⋅  ⋅
 ⋅  8  3  ⋅
 ⋅  ⋅  9  4

Bidiagonal(A, uplo::Symbol)

Construct a Bidiagonal matrix from the main diagonal of A and its first super- (if uplo=:U) or sub-diagonal (if uplo=:L).

Examples

julia> A = [1 1 1 1; 2 2 2 2; 3 3 3 3; 4 4 4 4]
4×4 Matrix{Int64}:
 1  1  1  1
 2  2  2  2
 3  3  3  3
 4  4  4  4

julia> Bidiagonal(A, :U) # contains the main diagonal and first superdiagonal of A
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  1  ⋅  ⋅
 ⋅  2  2  ⋅
 ⋅  ⋅  3  3
 ⋅  ⋅  ⋅  4

julia> Bidiagonal(A, :L) # contains the main diagonal and first subdiagonal of A
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  ⋅  ⋅  ⋅
 2  2  ⋅  ⋅
 ⋅  3  3  ⋅
 ⋅  ⋅  4  4

LinearAlgebra.SymTridiagonal (Type)
SymTridiagonal(dv::V, ev::V) where V <: AbstractVector

Construct a symmetric tridiagonal matrix from the diagonal (dv) and first sub/super-diagonal (ev), respectively. The result is of type SymTridiagonal and provides efficient specialized eigensolvers, but may be converted into a regular matrix with convert(Array, _) (or Array(_) for short).

For SymTridiagonal block matrices, the elements of dv are symmetrized. The argument ev is interpreted as the superdiagonal. Blocks from the subdiagonal are the (materialized) transposes of the corresponding superdiagonal blocks.

Examples

julia> dv = [1, 2, 3, 4]
4-element Vector{Int64}:
 1
 2
 3
 4

julia> ev = [7, 8, 9]
3-element Vector{Int64}:
 7
 8
 9

julia> SymTridiagonal(dv, ev)
4×4 SymTridiagonal{Int64, Vector{Int64}}:
 1  7  ⋅  ⋅
 7  2  8  ⋅
 ⋅  8  3  9
 ⋅  ⋅  9  4

julia> A = SymTridiagonal(fill([1 2; 3 4], 3), fill([1 2; 3 4], 2));

julia> A[1,1]
2×2 Symmetric{Int64, Matrix{Int64}}:
 1  2
 2  4

julia> A[1,2]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> A[2,1]
2×2 Matrix{Int64}:
 1  3
 2  4

SymTridiagonal(A::AbstractMatrix)

Construct a symmetric tridiagonal matrix from the diagonal and first superdiagonal of the symmetric matrix A.

Examples

julia> A = [1 2 3; 2 4 5; 3 5 6]
3×3 Matrix{Int64}:
 1  2  3
 2  4  5
 3  5  6

julia> SymTridiagonal(A)
3×3 SymTridiagonal{Int64, Vector{Int64}}:
 1  2  ⋅
 2  4  5
 ⋅  5  6

julia> B = reshape([[1 2; 2 3], [1 2; 3 4], [1 3; 2 4], [1 2; 2 3]], 2, 2);

julia> SymTridiagonal(B)
2×2 SymTridiagonal{Matrix{Int64}, Vector{Matrix{Int64}}}:
 [1 2; 2 3]  [1 3; 2 4]
 [1 2; 3 4]  [1 2; 2 3]

LinearAlgebra.Tridiagonal (Type)
Tridiagonal(dl::V, d::V, du::V) where V <: AbstractVector

Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type Tridiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with convert(Array, _) (or Array(_) for short). The lengths of dl and du must be one less than the length of d.

Note

The subdiagonal dl and the superdiagonal du must not be aliased to each other. If aliasing is detected, the constructor will use a copy of du as its argument.

Examples

julia> dl = [1, 2, 3];

julia> du = [4, 5, 6];

julia> d = [7, 8, 9, 0];

julia> Tridiagonal(dl, d, du)
4×4 Tridiagonal{Int64, Vector{Int64}}:
 7  4  ⋅  ⋅
 1  8  5  ⋅
 ⋅  2  9  6
 ⋅  ⋅  3  0

Tridiagonal(A)

Construct a tridiagonal matrix from the first sub-diagonal, diagonal and first super-diagonal of the matrix A.

Examples

julia> A = [1 2 3 4; 1 2 3 4; 1 2 3 4; 1 2 3 4]
4×4 Matrix{Int64}:
 1  2  3  4
 1  2  3  4
 1  2  3  4
 1  2  3  4

julia> Tridiagonal(A)
4×4 Tridiagonal{Int64, Vector{Int64}}:
 1  2  ⋅  ⋅
 1  2  3  ⋅
 ⋅  2  3  4
 ⋅  ⋅  3  4

LinearAlgebra.Symmetric (Type)
Symmetric(A::AbstractMatrix, uplo::Symbol=:U)

Construct a Symmetric view of the upper (if uplo = :U) or lower (if uplo = :L) triangle of the matrix A.

Symmetric views are mainly useful for real-symmetric matrices, for which specialized algorithms (e.g. for eigenproblems) are enabled for Symmetric types. More generally, see also Hermitian(A) for Hermitian matrices A == A', which is effectively equivalent to Symmetric for real matrices but is also useful for complex matrices. (Complex Symmetric matrices are supported but have few if any specialized algorithms.)

To compute the symmetric part of a real matrix, or more generally the Hermitian part (A + A') / 2 of a real or complex matrix A, use hermitianpart.

Examples

julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Matrix{Int64}:
 1  2  3
 4  5  6
 7  8  9

julia> Supper = Symmetric(A)
3×3 Symmetric{Int64, Matrix{Int64}}:
 1  2  3
 2  5  6
 3  6  9

julia> Slower = Symmetric(A, :L)
3×3 Symmetric{Int64, Matrix{Int64}}:
 1  4  7
 4  5  8
 7  8  9

julia> hermitianpart(A)
3×3 Hermitian{Float64, Matrix{Float64}}:
 1.0  3.0  5.0
 3.0  5.0  7.0
 5.0  7.0  9.0

Note that Supper will not be equal to Slower unless A is itself symmetric (e.g. if A == transpose(A)).

LinearAlgebra.Hermitian (Type)
Hermitian(A::AbstractMatrix, uplo::Symbol=:U)

Construct a Hermitian view of the upper (if uplo = :U) or lower (if uplo = :L) triangle of the matrix A.

To compute the Hermitian part of A, use hermitianpart.

Examples

julia> A = [1 2+2im 3-3im; 4 5 6-6im; 7 8+8im 9]
3×3 Matrix{Complex{Int64}}:
 1+0im  2+2im  3-3im
 4+0im  5+0im  6-6im
 7+0im  8+8im  9+0im

julia> Hupper = Hermitian(A)
3×3 Hermitian{Complex{Int64}, Matrix{Complex{Int64}}}:
 1+0im  2+2im  3-3im
 2-2im  5+0im  6-6im
 3+3im  6+6im  9+0im

julia> Hlower = Hermitian(A, :L)
3×3 Hermitian{Complex{Int64}, Matrix{Complex{Int64}}}:
 1+0im  4+0im  7+0im
 4+0im  5+0im  8-8im
 7+0im  8+8im  9+0im

julia> hermitianpart(A)
3×3 Hermitian{ComplexF64, Matrix{ComplexF64}}:
 1.0+0.0im  3.0+1.0im  5.0-1.5im
 3.0-1.0im  5.0+0.0im  7.0-7.0im
 5.0+1.5im  7.0+7.0im  9.0+0.0im

Note that Hupper will not be equal to Hlower unless A is itself Hermitian (e.g. if A == adjoint(A)).

All non-real parts of the diagonal will be ignored.

Hermitian(fill(complex(1,1), 1, 1)) == fill(1, 1, 1)
LinearAlgebra.LowerTriangular (Type)
LowerTriangular(A::AbstractMatrix)

Construct a LowerTriangular view of the matrix A.

Examples

julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> LowerTriangular(A)
3×3 LowerTriangular{Float64, Matrix{Float64}}:
 1.0   ⋅    ⋅
 4.0  5.0   ⋅
 7.0  8.0  9.0

LinearAlgebra.UpperTriangular (Type)
UpperTriangular(A::AbstractMatrix)

Construct an UpperTriangular view of the matrix A.

Examples

julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> UpperTriangular(A)
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 1.0  2.0  3.0
  ⋅   5.0  6.0
  ⋅    ⋅   9.0

LinearAlgebra.UnitLowerTriangular (Type)
UnitLowerTriangular(A::AbstractMatrix)

Construct a UnitLowerTriangular view of the matrix A. Such a view has the oneunit of the eltype of A on its diagonal.

Examples

julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> UnitLowerTriangular(A)
3×3 UnitLowerTriangular{Float64, Matrix{Float64}}:
 1.0   ⋅    ⋅
 4.0  1.0   ⋅
 7.0  8.0  1.0

LinearAlgebra.UnitUpperTriangular (Type)
UnitUpperTriangular(A::AbstractMatrix)

Construct a UnitUpperTriangular view of the matrix A. Such a view has the oneunit of the eltype of A on its diagonal.

Examples

julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> UnitUpperTriangular(A)
3×3 UnitUpperTriangular{Float64, Matrix{Float64}}:
 1.0  2.0  3.0
  ⋅   1.0  6.0
  ⋅    ⋅   1.0

LinearAlgebra.UpperHessenberg (Type)
UpperHessenberg(A::AbstractMatrix)

Construct an UpperHessenberg view of the matrix A. Entries of A below the first subdiagonal are ignored.

Julia 1.3

This type was added in Julia 1.3.

Efficient algorithms are implemented for H \ b, det(H), and similar.

See also the hessenberg function to factor any matrix into a similar upper-Hessenberg matrix.

If F::Hessenberg is the factorization object, the unitary matrix can be accessed with F.Q and the Hessenberg matrix with F.H. When Q is extracted, the resulting type is the HessenbergQ object, and may be converted to a regular matrix with convert(Array, _) (or Array(_) for short).

Iterating the decomposition produces the factors F.Q and F.H.
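
A small sketch of working with the factorization object (random data for illustration):

using LinearAlgebra
A = rand(4, 4)
F = hessenberg(A)
Q, H = F             # destructuring via iteration
Q * H * Q' ≈ A       # A is unitarily similar to its Hessenberg form
μ = 2.0
(F + μ*I) \ ones(4)  # efficient shifted solve, ≈ (A + μ*I) \ ones(4)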

Examples

julia> A = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16]
4×4 Matrix{Int64}:
  1   2   3   4
  5   6   7   8
  9  10  11  12
 13  14  15  16

julia> UpperHessenberg(A)
4×4 UpperHessenberg{Int64, Matrix{Int64}}:
 1   2   3   4
 5   6   7   8
 ⋅  10  11  12
 ⋅   ⋅  15  16

LinearAlgebra.UniformScaling (Type)
UniformScaling{T<:Number}

Generically sized uniform scaling operator defined as a scalar times the identity operator, λ*I. Although it is without an explicit size, it acts similarly to a matrix in many cases and includes support for some indexing. See also I.

Julia 1.6

Indexing using ranges is available as of Julia 1.6.

Examples

julia> J = UniformScaling(2.)
UniformScaling{Float64}
2.0*I

julia> A = [1. 2.; 3. 4.]
2×2 Matrix{Float64}:
 1.0  2.0
 3.0  4.0

julia> J*A
2×2 Matrix{Float64}:
 2.0  4.0
 6.0  8.0

julia> J[1:2, 1:2]
2×2 Matrix{Float64}:
 2.0  0.0
 0.0  2.0

LinearAlgebra.I (Constant)
I

An object of type UniformScaling, representing an identity matrix of any size.

Examples

julia> fill(1, (5,6)) * I == fill(1, (5,6))
true

julia> [1 2im 3; 1im 2 3] * I
2×3 Matrix{Complex{Int64}}:
 1+0im  0+2im  3+0im
 0+1im  2+0im  3+0im

LinearAlgebra.UniformScaling (Method)
(I::UniformScaling)(n::Integer)

Construct a Diagonal matrix from a UniformScaling.

Julia 1.2

This method is available as of Julia 1.2.

Examples

julia> I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> (0.7*I)(3)
3×3 Diagonal{Float64, Vector{Float64}}:
 0.7   ⋅    ⋅
  ⋅   0.7   ⋅
  ⋅    ⋅   0.7

LinearAlgebra.Factorization (Type)
LinearAlgebra.Factorization

Abstract type for matrix factorizations, a.k.a. matrix decompositions. See the online documentation for a list of available matrix factorizations.

LinearAlgebra.LU (Type)
LU <: Factorization

Matrix factorization type of the LU factorization of a square matrix A. This is the return type of lu, the corresponding matrix factorization function.

The individual components of the factorization F::LU can be accessed via getproperty:

| Component | Description |
| --- | --- |
| F.L | L (unit lower triangular) part of LU |
| F.U | U (upper triangular) part of LU |
| F.p | (right) permutation Vector |
| F.P | (right) permutation Matrix |

Iterating the factorization produces the components F.L, F.U, and F.p.

Examples

julia> A = [4 3; 6 3]
2×2 Matrix{Int64}:
 4  3
 6  3

julia> F = lu(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0       0.0
 0.666667  1.0
U factor:
2×2 Matrix{Float64}:
 6.0  3.0
 0.0  1.0

julia> F.L * F.U == A[F.p, :]
true

julia> l, u, p = lu(A); # destructuring via iteration

julia> l == F.L && u == F.U && p == F.p
true

LinearAlgebra.lu (Function)
lu(A::AbstractSparseMatrixCSC; check = true, q = nothing, control = get_umfpack_control()) -> F::UmfpackLU

Compute the LU factorization of a sparse matrixA.

For sparse A with real or complex element type, the return type of F is UmfpackLU{Tv, Ti}, with Tv = Float64 or ComplexF64 respectively, and Ti an integer type (Int32 or Int64).

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

The permutation q can either be a permutation vector or nothing. If no permutation vector is provided or q is nothing, UMFPACK's default is used. If the permutation is not zero-based, a zero-based copy is made.

The control vector defaults to the Julia SparseArrays package's default configuration for UMFPACK (NB: this is modified from the UMFPACK defaults to disable iterative refinement), but can be changed by passing a vector of length UMFPACK_CONTROL; see the UMFPACK manual for possible configurations. For example, to reenable iterative refinement:

umfpack_control = SparseArrays.UMFPACK.get_umfpack_control(Float64, Int64) # read Julia default configuration for a Float64 sparse matrix
SparseArrays.UMFPACK.show_umf_ctrl(umfpack_control) # optional - display values
umfpack_control[SparseArrays.UMFPACK.JL_UMFPACK_IRSTEP] = 2.0 # reenable iterative refinement (2 is the UMFPACK default maximum number of iterative refinement steps)
Alu = lu(A; control = umfpack_control)
x = Alu \ b   # solve Ax = b, including UMFPACK iterative refinement

The individual components of the factorization F can be accessed by indexing:

| Component | Description |
| --- | --- |
| L | L (lower triangular) part of LU |
| U | U (upper triangular) part of LU |
| p | right permutation Vector |
| q | left permutation Vector |
| Rs | Vector of scaling factors |
| : | (L,U,p,q,Rs) components |

The relation between F and A is

F.L*F.U == (F.Rs .* A)[F.p, F.q]

F further supports the following functions:

See also lu!.

Note

lu(A::AbstractSparseMatrixCSC) uses the UMFPACK[ACM832] library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, lu converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

lu(A, pivot = RowMaximum(); check = true, allowsingular = false) -> F::LU

Compute the LU factorization of A.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

By default, with check = true, an error is also thrown when the decomposition produces valid factors but the upper-triangular factor U is rank-deficient. This may be changed by passing allowsingular = true.

In most cases, if A is a subtype S of AbstractMatrix{T} with an element type T supporting +, -, * and /, the return type is LU{T,S{T}}.

In general, LU factorization involves a permutation of the rows of the matrix (corresponding to the F.p output described below), known as "pivoting" (because it corresponds to choosing which row contains the "pivot", the diagonal entry of F.U). One of the following pivoting strategies can be selected via the optional pivot argument:

  • RowMaximum() (default): the standard pivoting strategy; the pivot corresponds to the element of maximum absolute value among the remaining, to-be-factorized rows. This pivoting strategy requires the element type to also support abs and <. (This is generally the only numerically stable option for floating-point matrices.)
  • RowNonZero(): the pivot corresponds to the first non-zero element among the remaining, to-be-factorized rows. (This corresponds to the typical choice in hand calculations and is also useful for more general algebraic number types that support iszero but not abs or <.)
  • NoPivot(): pivoting turned off (will fail if a zero entry is encountered in a pivot position, even when allowsingular = true).

The individual components of the factorization F can be accessed via getproperty:

| Component | Description |
| --- | --- |
| F.L | L (lower triangular) part of LU |
| F.U | U (upper triangular) part of LU |
| F.p | (right) permutation Vector |
| F.P | (right) permutation Matrix |

Iterating the factorization produces the components F.L, F.U, and F.p.

The relationship between F and A is

F.L*F.U == A[F.p, :]

F further supports the following functions:

| Supported function | LU | LU{T,Tridiagonal{T}} |
| --- | --- | --- |
| / | ✓ | |
| \ | ✓ | ✓ |
| inv | ✓ | ✓ |
| det | ✓ | ✓ |
| logdet | ✓ | ✓ |
| logabsdet | ✓ | ✓ |
| size | ✓ | ✓ |

Julia 1.11

The allowsingular keyword argument was added in Julia 1.11.

Examples

julia> A = [4 3; 6 3]
2×2 Matrix{Int64}:
 4  3
 6  3

julia> F = lu(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0       0.0
 0.666667  1.0
U factor:
2×2 Matrix{Float64}:
 6.0  3.0
 0.0  1.0

julia> F.L * F.U == A[F.p, :]
true

julia> l, u, p = lu(A); # destructuring via iteration

julia> l == F.L && u == F.U && p == F.p
true

julia> lu([1 2; 1 2], allowsingular = true)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0  0.0
 1.0  1.0
U factor (rank-deficient):
2×2 Matrix{Float64}:
 1.0  2.0
 0.0  0.0

LinearAlgebra.lu! (Function)
lu!(F::UmfpackLU, A::AbstractSparseMatrixCSC; check=true, reuse_symbolic=true, q=nothing) -> F::UmfpackLU

Compute the LU factorization of a sparse matrix A, reusing the symbolic factorization of an already existing LU factorization stored in F. Unless reuse_symbolic is set to false, the sparse matrix A must have an identical nonzero pattern to the matrix used to create the LU factorization F; otherwise, an error is thrown. If the sizes of A and F differ, all vectors will be resized accordingly.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

The permutation q can either be a permutation vector or nothing. If no permutation vector is provided or q is nothing, UMFPACK's default is used. If the permutation is not zero-based, a zero-based copy is made.

See also lu.

Note

lu!(F::UmfpackLU, A::AbstractSparseMatrixCSC) uses the UMFPACK library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, lu! will automatically convert the types to those set by the LU factorization or SparseMatrixCSC{ComplexF64} as appropriate.

Julia 1.5

lu! for UmfpackLU requires at least Julia 1.5.

Examples

julia> A = sparse(Float64[1.0 2.0; 0.0 3.0]);

julia> F = lu(A);

julia> B = sparse(Float64[1.0 1.0; 0.0 1.0]);

julia> lu!(F, B);

julia> F \ ones(2)
2-element Vector{Float64}:
 0.0
 1.0

lu!(A, pivot = RowMaximum(); check = true, allowsingular = false) -> LU

lu! is the same as lu, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.

Julia 1.11

The allowsingular keyword argument was added in Julia 1.11.

Examples

julia> A = [4. 3.; 6. 3.]
2×2 Matrix{Float64}:
 4.0  3.0
 6.0  3.0

julia> F = lu!(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0       0.0
 0.666667  1.0
U factor:
2×2 Matrix{Float64}:
 6.0  3.0
 0.0  1.0

julia> iA = [4 3; 6 3]
2×2 Matrix{Int64}:
 4  3
 6  3

julia> lu!(iA)
ERROR: InexactError: Int64(0.6666666666666666)
Stacktrace:
[...]

LinearAlgebra.Cholesky (Type)
Cholesky <: Factorization

Matrix factorization type of the Cholesky factorization of a dense symmetric/Hermitian positive definite matrix A. This is the return type of cholesky, the corresponding matrix factorization function.

The triangular Cholesky factor can be obtained from the factorization F::Cholesky via F.L and F.U, where A ≈ F.U' * F.U ≈ F.L * F.L'.

The following functions are available for Cholesky objects: size, \, inv, det, logdet and isposdef.

Iterating the decomposition produces the components L and U.

Examples

julia> A = [4. 12. -16.; 12. 37. -43.; -16. -43. 98.]
3×3 Matrix{Float64}:
   4.0   12.0  -16.0
  12.0   37.0  -43.0
 -16.0  -43.0   98.0

julia> C = cholesky(A)
Cholesky{Float64, Matrix{Float64}}
U factor:
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.U
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.L
3×3 LowerTriangular{Float64, Matrix{Float64}}:
  2.0   ⋅    ⋅
  6.0  1.0   ⋅
 -8.0  5.0  3.0

julia> C.L * C.U == A
true

julia> l, u = C; # destructuring via iteration

julia> l == C.L && u == C.U
true

LinearAlgebra.CholeskyPivoted (Type)
CholeskyPivoted

Matrix factorization type of the pivoted Cholesky factorization of a dense symmetric/Hermitian positive semi-definite matrix A. This is the return type of cholesky(_, ::RowMaximum), the corresponding matrix factorization function.

The triangular Cholesky factor can be obtained from the factorization F::CholeskyPivoted via F.L and F.U, and the permutation via F.p, where A[F.p, F.p] ≈ Ur' * Ur ≈ Lr * Lr' with Ur = F.U[1:F.rank, :] and Lr = F.L[:, 1:F.rank], or alternatively A ≈ Up' * Up ≈ Lp * Lp' with Up = F.U[1:F.rank, invperm(F.p)] and Lp = F.L[invperm(F.p), 1:F.rank].

The following functions are available for CholeskyPivoted objects: size, \, inv, det, and rank.

Iterating the decomposition produces the components L and U.

Examples

julia> X = [1.0, 2.0, 3.0, 4.0];

julia> A = X * X';

julia> C = cholesky(A, RowMaximum(), check = false)
CholeskyPivoted{Float64, Matrix{Float64}, Vector{Int64}}
U factor with rank 1:
4×4 UpperTriangular{Float64, Matrix{Float64}}:
 4.0  2.0  3.0  1.0
  ⋅   0.0  6.0  2.0
  ⋅    ⋅   9.0  3.0
  ⋅    ⋅    ⋅   1.0
permutation:
4-element Vector{Int64}:
 4
 2
 3
 1

julia> C.U[1:C.rank, :]' * C.U[1:C.rank, :] ≈ A[C.p, C.p]
true

julia> l, u = C; # destructuring via iteration

julia> l == C.L && u == C.U
true

LinearAlgebra.cholesky (Function)
cholesky(A, NoPivot(); check = true) -> Cholesky

Compute the Cholesky factorization of a dense symmetric positive definite matrix A and return a Cholesky factorization. The matrix A can either be a Symmetric or Hermitian AbstractMatrix or a perfectly symmetric or Hermitian AbstractMatrix.

The triangular Cholesky factor can be obtained from the factorization F via F.L and F.U, where A ≈ F.U' * F.U ≈ F.L * F.L'.

The following functions are available for Cholesky objects: size, \, inv, det, logdet and isposdef.

If you have a matrix A that is slightly non-Hermitian due to roundoff errors in its construction, wrap it in Hermitian(A) before passing it to cholesky in order to treat it as perfectly Hermitian.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

Examples

julia> A = [4. 12. -16.; 12. 37. -43.; -16. -43. 98.]
3×3 Matrix{Float64}:
   4.0   12.0  -16.0
  12.0   37.0  -43.0
 -16.0  -43.0   98.0

julia> C = cholesky(A)
Cholesky{Float64, Matrix{Float64}}
U factor:
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.U
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.L
3×3 LowerTriangular{Float64, Matrix{Float64}}:
  2.0   ⋅    ⋅
  6.0  1.0   ⋅
 -8.0  5.0  3.0

julia> C.L * C.U == A
true

cholesky(A, RowMaximum(); tol = 0.0, check = true) -> CholeskyPivoted

Compute the pivoted Cholesky factorization of a dense symmetric positive semi-definite matrix A and return a CholeskyPivoted factorization. The matrix A can either be a Symmetric or Hermitian AbstractMatrix or a perfectly symmetric or Hermitian AbstractMatrix.

The triangular Cholesky factor can be obtained from the factorization F via F.L and F.U, and the permutation via F.p, where A[F.p, F.p] ≈ Ur' * Ur ≈ Lr * Lr' with Ur = F.U[1:F.rank, :] and Lr = F.L[:, 1:F.rank], or alternatively A ≈ Up' * Up ≈ Lp * Lp' with Up = F.U[1:F.rank, invperm(F.p)] and Lp = F.L[invperm(F.p), 1:F.rank].

The following functions are available for CholeskyPivoted objects: size, \, inv, det, and rank.

The argument tol determines the tolerance for determining the rank. For negative values, the tolerance is the machine precision.

If you have a matrix A that is slightly non-Hermitian due to roundoff errors in its construction, wrap it in Hermitian(A) before passing it to cholesky in order to treat it as perfectly Hermitian.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

Examples

julia> X = [1.0, 2.0, 3.0, 4.0];

julia> A = X * X';

julia> C = cholesky(A, RowMaximum(), check = false)
CholeskyPivoted{Float64, Matrix{Float64}, Vector{Int64}}
U factor with rank 1:
4×4 UpperTriangular{Float64, Matrix{Float64}}:
 4.0  2.0  3.0  1.0
  ⋅   0.0  6.0  2.0
  ⋅    ⋅   9.0  3.0
  ⋅    ⋅    ⋅   1.0
permutation:
4-element Vector{Int64}:
 4
 2
 3
 1

julia> C.U[1:C.rank, :]' * C.U[1:C.rank, :] ≈ A[C.p, C.p]
true

julia> l, u = C; # destructuring via iteration

julia> l == C.L && u == C.U
true

cholesky(A::SparseMatrixCSC; shift = 0.0, check = true, perm = nothing) -> CHOLMOD.Factor

Compute the Cholesky factorization of a sparse positive definite matrix A. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian. If perm is not given, a fill-reducing permutation is used. F = cholesky(A) is most frequently used to solve systems of equations with F\b, but the methods diag, det, and logdet are also defined for F. You can also extract individual factors from F, using F.L. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. To include the effects of permutation, it's typically preferable to extract "combined" factors like PtL = F.PtL (the equivalent of P'*L) and LtP = F.UP (the equivalent of L'*P).

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

Setting the optional shift keyword argument computes the factorization of A+shift*I instead of A. If the perm argument is provided, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD's default AMD ordering).

Examples

In the following example, the fill-reducing permutation used is [3, 2, 1]. If perm is set to 1:3 to enforce no permutation, the number of nonzero elements in the factor is 6.

julia> A = [2 1 1; 1 2 0; 1 0 2]
3×3 Matrix{Int64}:
 2  1  1
 1  2  0
 1  0  2

julia> C = cholesky(sparse(A))
SparseArrays.CHOLMOD.Factor{Float64, Int64}
type:    LLt
method:  simplicial
maxnnz:  5
nnz:     5
success: true

julia> C.p
3-element Vector{Int64}:
 3
 2
 1

julia> L = sparse(C.L);

julia> Matrix(L)
3×3 Matrix{Float64}:
 1.41421   0.0       0.0
 0.0       1.41421   0.0
 0.707107  0.707107  1.0

julia> L * L' ≈ A[C.p, C.p]
true

julia> P = sparse(1:3, C.p, ones(3))
3×3 SparseMatrixCSC{Float64, Int64} with 3 stored entries:
  ⋅    ⋅   1.0
  ⋅   1.0   ⋅
 1.0   ⋅    ⋅

julia> P' * L * L' * P ≈ A
true

julia> C = cholesky(sparse(A), perm=1:3)
SparseArrays.CHOLMOD.Factor{Float64, Int64}
type:    LLt
method:  simplicial
maxnnz:  6
nnz:     6
success: true

julia> L = sparse(C.L);

julia> Matrix(L)
3×3 Matrix{Float64}:
 1.41421    0.0       0.0
 0.707107   1.22474   0.0
 0.707107  -0.408248  1.1547

julia> L * L' ≈ A
true
Note

This method uses the CHOLMOD[ACM887][DavisHager2009] library from SuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.

LinearAlgebra.cholesky! (Function)
cholesky!(A::AbstractMatrix, NoPivot(); check = true) -> Cholesky

The same as cholesky, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.

Examples

julia> A = [1 2; 2 50]
2×2 Matrix{Int64}:
 1   2
 2  50

julia> cholesky!(A)
ERROR: InexactError: Int64(6.782329983125268)
Stacktrace:
[...]

cholesky!(A::AbstractMatrix, RowMaximum(); tol = 0.0, check = true) -> CholeskyPivoted

The same as cholesky, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.

cholesky!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor

Compute the Cholesky ($LL'$) factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian.

See also cholesky.

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

LinearAlgebra.lowrankupdate (Function)
lowrankupdate(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Update a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U + v*v') but the computation of CC only uses O(n^2) operations.

lowrankupdate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor

Get an LDLt factorization of A + C*C' given an LDLt or LLt factorization F of A.

The returned factor is always an LDLt factorization.

See also lowrankupdate!, lowrankdowndate, lowrankdowndate!.
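
For example, for the dense Cholesky method (illustrative values):

using LinearAlgebra
A = [4.0 2.0; 2.0 3.0]     # symmetric positive definite
C = cholesky(A)
v = [1.0, 0.5]
CC = lowrankupdate(C, v)   # factorization of A + v*v' in O(n²)
CC.U' * CC.U ≈ A + v*v'    # true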

LinearAlgebra.lowrankdowndate (Function)
lowrankdowndate(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Downdate a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U - v*v') but the computation of CC only uses O(n^2) operations.

lowrankdowndate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor

Get an LDLt factorization of A - C*C' given an LDLt or LLt factorization F of A.

The returned factor is always an LDLt factorization.

See also lowrankdowndate!, lowrankupdate, lowrankupdate!.

LinearAlgebra.lowrankupdate! (Function)
lowrankupdate!(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Update a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U + v*v') but the computation of CC only uses O(n^2) operations. The input factorization C is updated in place such that on exit C == CC. The vector v is destroyed during the computation.

lowrankupdate!(F::CHOLMOD.Factor, C::AbstractArray)

Update an LDLt or LLt factorization F of A to a factorization of A + C*C'.

LLt factorizations are converted to LDLt.

See also lowrankupdate, lowrankdowndate, lowrankdowndate!.

LinearAlgebra.lowrankdowndate! (Function)
lowrankdowndate!(C::Cholesky, v::AbstractVector) -> CC::Cholesky

Downdate a Cholesky factorization C with the vector v. If A = C.U'C.U then CC = cholesky(C.U'C.U - v*v') but the computation of CC only uses O(n^2) operations. The input factorization C is updated in place such that on exit C == CC. The vector v is destroyed during the computation.

lowrankdowndate!(F::CHOLMOD.Factor, C::AbstractArray)

Update an LDLt or LLt factorization F of A to a factorization of A - C*C'.

LLt factorizations are converted to LDLt.

See also lowrankdowndate, lowrankupdate, lowrankupdate!.

LinearAlgebra.LDLt (Type)
LDLt <: Factorization

Matrix factorization type of the LDLt factorization of a real SymTridiagonal matrix S such that S = L*Diagonal(d)*L', where L is a UnitLowerTriangular matrix and d is a vector. The main use of an LDLt factorization F = ldlt(S) is to solve the linear system of equations Sx = b with F\b. This is the return type of ldlt, the corresponding matrix factorization function.

The individual components of the factorization F::LDLt can be accessed via getproperty:

| Component | Description |
| --- | --- |
| F.L | L (unit lower triangular) part of LDLt |
| F.D | D (diagonal) part of LDLt |
| F.Lt | Lt (unit upper triangular) part of LDLt |
| F.d | diagonal values of D as a Vector |

Examples

julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])
3×3 SymTridiagonal{Float64, Vector{Float64}}:
 3.0  1.0   ⋅
 1.0  4.0  2.0
  ⋅   2.0  5.0

julia> F = ldlt(S)
LDLt{Float64, SymTridiagonal{Float64, Vector{Float64}}}
L factor:
3×3 UnitLowerTriangular{Float64, SymTridiagonal{Float64, Vector{Float64}}}:
 1.0        ⋅         ⋅
 0.333333  1.0        ⋅
 0.0       0.545455  1.0
D factor:
3×3 Diagonal{Float64, Vector{Float64}}:
 3.0   ⋅        ⋅
  ⋅   3.66667   ⋅
  ⋅    ⋅       3.90909

LinearAlgebra.ldlt (Function)
ldlt(S::SymTridiagonal) -> LDLt

Compute an LDLt (i.e., $LDL^T$) factorization of the real symmetric tridiagonal matrix S such that S = L*Diagonal(d)*L' where L is a unit lower triangular matrix and d is a vector. The main use of an LDLt factorization F = ldlt(S) is to solve the linear system of equations Sx = b with F\b.

See also bunchkaufman for a similar, but pivoted, factorization of arbitrary symmetric or Hermitian matrices.

Examples

julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])3×3 SymTridiagonal{Float64, Vector{Float64}}: 3.0  1.0   ⋅ 1.0  4.0  2.0  ⋅   2.0  5.0julia> ldltS = ldlt(S);julia> b = [6., 7., 8.];julia> ldltS \ b3-element Vector{Float64}: 1.7906976744186047 0.627906976744186 1.3488372093023255julia> S \ b3-element Vector{Float64}: 1.7906976744186047 0.627906976744186 1.3488372093023255
ldlt(A::SparseMatrixCSC; shift = 0.0, check = true, perm=nothing) -> CHOLMOD.Factor

Compute the$LDL'$ factorization of a sparse matrixA.A must be aSparseMatrixCSC or aSymmetric/Hermitian view of aSparseMatrixCSC. Note that even ifA doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used.F = ldlt(A) is most frequently used to solve systems of equationsA*x = b withF\b. The returned factorization objectF also supports the methodsdiag,det,logdet, andinv. You can extract individual factors fromF usingF.L. However, since pivoting is on by default, the factorization is internally represented asA == P'*L*D*L'*P with a permutation matrixP; using justL without accounting forP will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors likePtL = F.PtL (the equivalent ofP'*L) andLtP = F.UP (the equivalent ofL'*P). The complete list of supported factors is:L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP.

Whencheck = true, an error is thrown if the decomposition fails. Whencheck = false, responsibility for checking the decomposition's validity (viaissuccess) lies with the user.

Setting the optionalshift keyword argument computes the factorization ofA+shift*I instead ofA. If theperm argument is provided, it should be a permutation of1:size(A,1) giving the ordering to use (instead of CHOLMOD's default AMD ordering).
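A minimal sketch of solving with a sparse LDLt factorization and reconstructing A from the permutation-combined factors (the matrix and right-hand side are illustrative):

using LinearAlgebra, SparseArrays

A = sparse([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])
F = ldlt(Symmetric(A))
b = [1.0, 2.0, 3.0]
x = F \ b
A * x ≈ b                     # true

PtL = F.PtL                   # the combined factor P'*L
PtL * F.D * PtL' ≈ A          # true: reconstruction accounts for the permutation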

Note

This method uses the CHOLMOD[ACM887][DavisHager2009] library fromSuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from theBase.SparseArrays.CHOLMOD module.

LinearAlgebra.ldlt!Function
ldlt!(S::SymTridiagonal) -> LDLt

Same asldlt, but saves space by overwriting the inputS, instead of creating a copy.

Examples

julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])3×3 SymTridiagonal{Float64, Vector{Float64}}: 3.0  1.0   ⋅ 1.0  4.0  2.0  ⋅   2.0  5.0julia> ldltS = ldlt!(S);julia> ldltS === Sfalsejulia> S3×3 SymTridiagonal{Float64, Vector{Float64}}: 3.0       0.333333   ⋅ 0.333333  3.66667   0.545455  ⋅        0.545455  3.90909
ldlt!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor

Compute the$LDL'$ factorization ofA, reusing the symbolic factorizationF.A must be aSparseMatrixCSC or aSymmetric/Hermitian view of aSparseMatrixCSC. Note that even ifA doesn't have the type tag, it must still be symmetric or Hermitian.

See alsoldlt.

Note

This method uses the CHOLMOD library fromSuiteSparse, which only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

LinearAlgebra.QRType
QR <: Factorization

A QR matrix factorization stored in a packed format, typically obtained fromqr. If$A$ is anm×n matrix, then

\[A = Q R\]

where$Q$ is an orthogonal/unitary matrix and$R$ is upper triangular. The matrix$Q$ is stored as a sequence of Householder reflectors$v_i$ and coefficients$\tau_i$ where:

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T).\]

Iterating the decomposition produces the componentsQ andR.

The object has two fields:

  • factors is anm×n matrix.

    • The upper triangular part contains the elements of$R$, that isR = triu(F.factors) for aQR objectF.

    • The subdiagonal part contains the reflectors$v_i$ stored in a packed format where$v_i$ is the$i$th column of the matrixV = I + tril(F.factors, -1).

  • τ is a vector of lengthmin(m,n) containing the coefficients$\tau_i$.
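To see how the packed storage relates to Q, one can rebuild Q from the reflectors. A minimal sketch, using a non-BLAS element type so that qr returns a plain QR object rather than a QRCompactWY (the matrix is illustrative):

using LinearAlgebra

A = big.(randn(3, 3))       # non-BLAS eltype, so qr returns a QR object
F = qr(A)
V = I + tril(F.factors, -1) # columns of V are the reflectors v_i
Q = prod((I - F.τ[i] * V[:, i] * V[:, i]') for i in 1:3)
Q ≈ F.Q * I                 # true: the reflector product reproduces Q
triu(F.factors) ≈ F.R       # true: the upper triangle is R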

LinearAlgebra.QRCompactWYType
QRCompactWY <: Factorization

A QR matrix factorization stored in a compact blocked format, typically obtained fromqr. If$A$ is anm×n matrix, then

\[A = Q R\]

where$Q$ is an orthogonal/unitary matrix and$R$ is upper triangular. It is similar to theQR format except that the orthogonal/unitary matrix$Q$ is stored inCompact WY format[Schreiber1989]. For the block size$n_b$, it is stored as anm×n lower trapezoidal matrix$V$ and a matrix$T = (T_1 \; T_2 \; ... \; T_{b-1} \; T_b')$ composed of$b = \lceil \min(m,n) / n_b \rceil$ upper triangular matrices$T_j$ of size$n_b$×$n_b$ ($j = 1, ..., b-1$) and an upper trapezoidal$n_b$×$\min(m,n) - (b-1) n_b$ matrix$T_b'$ ($j=b$) whose upper square part, denoted$T_b$, satisfies

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T)= \prod_{j=1}^{b} (I - V_j T_j V_j^T)\]

such that$v_i$ is the$i$th column of$V$,$\tau_i$ is the$i$th element of[diag(T_1); diag(T_2); …; diag(T_b)], and$(V_1 \; V_2 \; ... \; V_b)$ is the leftm×min(m, n) block of$V$. When constructed usingqr, the block size is given by$n_b = \min(m, n, 36)$.

Iterating the decomposition produces the componentsQ andR.

The object has two fields:

  • factors, as in theQR type, is anm×n matrix.

    • The upper triangular part contains the elements of$R$, that isR = triu(F.factors) for aQR objectF.

    • The subdiagonal part contains the reflectors$v_i$ stored in a packed format such thatV = I + tril(F.factors, -1).

  • T is a$n_b$-by-$\min(m,n)$ matrix as described above. The subdiagonal elements for each triangular matrix$T_j$ are ignored.

Note

This format should not be confused with the olderWY representation[Bischof1987].

LinearAlgebra.QRPivotedType
QRPivoted <: Factorization

A QR matrix factorization with column pivoting in a packed format, typically obtained fromqr. If$A$ is anm×n matrix, then

\[A P = Q R\]

where$P$ is a permutation matrix,$Q$ is an orthogonal/unitary matrix and$R$ is upper triangular. The matrix$Q$ is stored as a sequence of Householder reflectors:

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T).\]

Iterating the decomposition produces the componentsQ,R, andp.

The object has three fields:

  • factors is anm×n matrix.

    • The upper triangular part contains the elements of$R$, that isR = triu(F.factors) for aQR objectF.

    • The subdiagonal part contains the reflectors$v_i$ stored in a packed format where$v_i$ is the$i$th column of the matrixV = I + tril(F.factors, -1).

  • τ is a vector of lengthmin(m,n) containing the coefficients$\tau_i$.

  • jpvt is an integer vector of lengthn corresponding to the permutation$P$.

LinearAlgebra.qrFunction
qr(A::SparseMatrixCSC; tol=_default_tol(A), ordering=ORDERING_DEFAULT) -> QRSparse

Compute theQR factorization of a sparse matrixA. Fill-reducing row and column permutations are used such thatF.R = F.Q'*A[F.prow,F.pcol]. The main application of this type is to solve least squares or underdetermined problems with\. The function calls the C library SPQR[ACM933].

Note

qr(A::SparseMatrixCSC) uses the SPQR library that is part ofSuiteSparse. As this library only supports sparse matrices withFloat64 orComplexF64 elements, as of Julia v1.4qr convertsA into a copy that is of typeSparseMatrixCSC{Float64} orSparseMatrixCSC{ComplexF64} as appropriate.

Examples

julia> A = sparse([1,2,3,4], [1,1,2,2], [1.0,1.0,1.0,1.0])4×2 SparseMatrixCSC{Float64, Int64} with 4 stored entries: 1.0   ⋅ 1.0   ⋅  ⋅   1.0  ⋅   1.0julia> qr(A)SparseArrays.SPQR.QRSparse{Float64, Int64}Q factor:4×4 SparseArrays.SPQR.QRSparseQ{Float64, Int64}R factor:2×2 SparseMatrixCSC{Float64, Int64} with 2 stored entries: -1.41421    ⋅   ⋅       -1.41421Row permutation:4-element Vector{Int64}: 1 3 4 2Column permutation:2-element Vector{Int64}: 1 2
qr(A, pivot = NoPivot(); blocksize) -> F

Compute the QR factorization of the matrixA: an orthogonal (or unitary ifA is complex-valued) matrixQ, and an upper triangular matrixR such that

\[A = Q R\]

The returned objectF stores the factorization in a packed format:

  • ifpivot == ColumnNorm() thenF is aQRPivoted object,

  • otherwise if the element type ofA is a BLAS type (Float32,Float64,ComplexF32 orComplexF64), thenF is aQRCompactWY object,

  • otherwiseF is aQR object.

The individual components of the decompositionF can be retrieved via property accessors:

  • F.Q: the orthogonal/unitary matrixQ
  • F.R: the upper triangular matrixR
  • F.p: the permutation vector of the pivot (QRPivoted only)
  • F.P: the permutation matrix of the pivot (QRPivoted only)
Note

Each reference to the upper triangular factor viaF.R allocates a new array. It is therefore advisable to cache that array, say, byR = F.R and continue working withR.

Iterating the decomposition produces the componentsQ,R and, if it exists,p.

The following functions are available for theQR objects:inv,size, and\. WhenA is rectangular,\ will return a least squares solution and if the solution is not unique, the one with smallest norm is returned. WhenA is not full rank, factorization with (column) pivoting is required to obtain a minimum norm solution.

Multiplication with respect to either full/square or non-full/squareQ is allowed, i.e. bothF.Q*F.R andF.Q*A are supported. AQ matrix can be converted into a regular matrix withMatrix. This operation returns the "thin" Q factor, i.e., ifA ism×n withm>=n, thenMatrix(F.Q) yields anm×n matrix with orthonormal columns. To retrieve the "full" Q factor, anm×m orthogonal matrix, useF.Q*I orcollect(F.Q). Ifm<=n, thenMatrix(F.Q) yields anm×m orthogonal matrix.
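A minimal sketch of the thin vs. full Q factor and a least-squares solve (the matrix and right-hand side are illustrative):

using LinearAlgebra

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]   # 3×2 with full column rank
F = qr(A)

size(Matrix(F.Q))    # (3, 2): the "thin" Q
size(F.Q * I)        # (3, 3): the "full" Q

b = [1.0, 0.0, 1.0]
x = F \ b                          # least-squares solution
norm(A' * (A*x - b)) < 1e-10       # true: residual orthogonal to the columns of A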

The block size for QR decomposition can be specified by keyword argumentblocksize :: Integer whenpivot == NoPivot() andA isa StridedMatrix{<:BlasFloat}. It is ignored whenblocksize > minimum(size(A)). SeeQRCompactWY.

Julia 1.4

Theblocksize keyword argument requires Julia 1.4 or later.

Examples

julia> A = [3.0 -6.0; 4.0 -8.0; 0.0 1.0]3×2 Matrix{Float64}: 3.0  -6.0 4.0  -8.0 0.0   1.0julia> F = qr(A)LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}, Matrix{Float64}}Q factor: 3×3 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}R factor:2×2 Matrix{Float64}: -5.0  10.0  0.0  -1.0julia> F.Q * F.R == Atrue
Note

qr returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that theQ andR matrices can be stored compactly rather than as two separate dense matrices.

LinearAlgebra.qr!Function
qr!(A, pivot = NoPivot(); blocksize)

qr! is the same asqr whenA is a subtype ofAbstractMatrix, but saves space by overwriting the inputA, instead of creating a copy. AnInexactError exception is thrown if the factorization produces a number not representable by the element type ofA, e.g. for integer types.

Julia 1.4

Theblocksize keyword argument requires Julia 1.4 or later.

Examples

julia> a = [1. 2.; 3. 4.]2×2 Matrix{Float64}: 1.0  2.0 3.0  4.0julia> qr!(a)LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}, Matrix{Float64}}Q factor: 2×2 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}R factor:2×2 Matrix{Float64}: -3.16228  -4.42719  0.0      -0.632456julia> a = [1 2; 3 4]2×2 Matrix{Int64}: 1  2 3  4julia> qr!(a)ERROR: InexactError: Int64(3.1622776601683795)Stacktrace:[...]
LinearAlgebra.LQType
LQ <: Factorization

Matrix factorization type of theLQ factorization of a matrixA. TheLQ decomposition is theQR decomposition oftranspose(A). This is the return type oflq, the corresponding matrix factorization function.

IfS::LQ is the factorization object, the lower triangular component can be obtained viaS.L, and the orthogonal/unitary component viaS.Q, such thatA ≈ S.L*S.Q.

Iterating the decomposition produces the componentsS.L andS.Q.

Examples

julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> S = lq(A)LQ{Float64, Matrix{Float64}, Vector{Float64}}L factor:2×2 Matrix{Float64}: -8.60233   0.0  4.41741  -0.697486Q factor: 2×2 LinearAlgebra.LQPackedQ{Float64, Matrix{Float64}, Vector{Float64}}julia> S.L * S.Q2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> l, q = S; # destructuring via iterationjulia> l == S.L &&  q == S.Qtrue
LinearAlgebra.lqFunction
lq(A) -> S::LQ

Compute the LQ decomposition ofA. The decomposition's lower triangular component can be obtained from theLQ objectS viaS.L, and the orthogonal/unitary component viaS.Q, such thatA ≈ S.L*S.Q.

Iterating the decomposition produces the componentsS.L andS.Q.

The LQ decomposition is the QR decomposition oftranspose(A), and it is useful in order to compute the minimum-norm solutionlq(A) \ b to an underdetermined system of equations (A has more columns than rows, but has full row rank).
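A minimal sketch of such a minimum-norm solve (the system is illustrative):

using LinearAlgebra

A = [1.0 2.0 3.0; 4.0 5.0 6.0]   # 2×3 with full row rank: underdetermined
b = [1.0, 2.0]
x = lq(A) \ b                    # minimum-norm solution
A * x ≈ b                        # true
x ≈ pinv(A) * b                  # true: agrees with the pseudoinverse solution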

Examples

julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> S = lq(A)LQ{Float64, Matrix{Float64}, Vector{Float64}}L factor:2×2 Matrix{Float64}: -8.60233   0.0  4.41741  -0.697486Q factor: 2×2 LinearAlgebra.LQPackedQ{Float64, Matrix{Float64}, Vector{Float64}}julia> S.L * S.Q2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> l, q = S; # destructuring via iterationjulia> l == S.L &&  q == S.Qtrue
LinearAlgebra.lq!Function
lq!(A) -> LQ

Compute theLQ factorization ofA, using the input matrix as a workspace. See alsolq.

LinearAlgebra.BunchKaufmanType
BunchKaufman <: Factorization

Matrix factorization type of the Bunch-Kaufman factorization of a symmetric or Hermitian matrixA asP'UDU'P orP'LDL'P, depending on whether the upper (the default) or the lower triangle is stored inA. IfA is complex symmetric thenU' andL' denote the unconjugated transposes, i.e.transpose(U) andtranspose(L), respectively. This is the return type ofbunchkaufman, the corresponding matrix factorization function.

IfS::BunchKaufman is the factorization object, the components can be obtained viaS.D,S.U orS.L as appropriate givenS.uplo, andS.p.

Iterating the decomposition produces the componentsS.D,S.U orS.L as appropriate givenS.uplo, andS.p.

Examples

julia> A = Float64.([1 2; 2 3])2×2 Matrix{Float64}: 1.0  2.0 2.0  3.0julia> S = bunchkaufman(A) # A gets wrapped internally by Symmetric(A)BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: -0.333333  0.0  0.0       3.0U factor:2×2 UnitUpperTriangular{Float64, Matrix{Float64}}: 1.0  0.666667  ⋅   1.0permutation:2-element Vector{Int64}: 1 2julia> d, u, p = S; # destructuring via iterationjulia> d == S.D && u == S.U && p == S.ptruejulia> S = bunchkaufman(Symmetric(A, :L))BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: 3.0   0.0 0.0  -0.333333L factor:2×2 UnitLowerTriangular{Float64, Matrix{Float64}}: 1.0        ⋅ 0.666667  1.0permutation:2-element Vector{Int64}: 2 1
LinearAlgebra.bunchkaufmanFunction
bunchkaufman(A, rook::Bool=false; check = true) -> S::BunchKaufman

Compute the Bunch-Kaufman[Bunch1977] factorization of a symmetric or Hermitian matrixA asP'*U*D*U'*P orP'*L*D*L'*P, depending on which triangle is stored inA, and return aBunchKaufman object. Note that ifA is complex symmetric thenU' andL' denote the unconjugated transposes, i.e.transpose(U) andtranspose(L).

Iterating the decomposition produces the componentsS.D,S.U orS.L as appropriate givenS.uplo, andS.p.

Ifrook istrue, rook pivoting is used; otherwise it is not.

Whencheck = true, an error is thrown if the decomposition fails. Whencheck = false, responsibility for checking the decomposition's validity (viaissuccess) lies with the user.

The following functions are available forBunchKaufman objects:size,\,inv,issymmetric,ishermitian,getindex.

Examples

julia> A = Float64.([1 2; 2 3])2×2 Matrix{Float64}: 1.0  2.0 2.0  3.0julia> S = bunchkaufman(A) # A gets wrapped internally by Symmetric(A)BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: -0.333333  0.0  0.0       3.0U factor:2×2 UnitUpperTriangular{Float64, Matrix{Float64}}: 1.0  0.666667  ⋅   1.0permutation:2-element Vector{Int64}: 1 2julia> d, u, p = S; # destructuring via iterationjulia> d == S.D && u == S.U && p == S.ptruejulia> S.U*S.D*S.U' - S.P*A*S.P'2×2 Matrix{Float64}: 0.0  0.0 0.0  0.0julia> S = bunchkaufman(Symmetric(A, :L))BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: 3.0   0.0 0.0  -0.333333L factor:2×2 UnitLowerTriangular{Float64, Matrix{Float64}}: 1.0        ⋅ 0.666667  1.0permutation:2-element Vector{Int64}: 2 1julia> S.L*S.D*S.L' - A[S.p, S.p]2×2 Matrix{Float64}: 0.0  0.0 0.0  0.0
LinearAlgebra.bunchkaufman!Function
bunchkaufman!(A, rook::Bool=false; check = true) -> BunchKaufman

bunchkaufman! is the same asbunchkaufman, but saves space by overwriting the inputA, instead of creating a copy.

LinearAlgebra.EigenType
Eigen <: Factorization

Matrix factorization type of the eigenvalue/spectral decomposition of a square matrixA. This is the return type ofeigen, the corresponding matrix factorization function.

IfF::Eigen is the factorization object, the eigenvalues can be obtained viaF.values and the eigenvectors as the columns of the matrixF.vectors. (Thekth eigenvector can be obtained from the sliceF.vectors[:, k].)

Iterating the decomposition produces the componentsF.values andF.vectors.

Examples

julia> F = eigen([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}values:3-element Vector{Float64}:  1.0  3.0 18.0vectors:3×3 Matrix{Float64}: 1.0  0.0  0.0 0.0  1.0  0.0 0.0  0.0  1.0julia> F.values3-element Vector{Float64}:  1.0  3.0 18.0julia> F.vectors3×3 Matrix{Float64}: 1.0  0.0  0.0 0.0  1.0  0.0 0.0  0.0  1.0julia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
LinearAlgebra.GeneralizedEigenType
GeneralizedEigen <: Factorization

Matrix factorization type of the generalized eigenvalue/spectral decomposition ofA andB. This is the return type ofeigen, the corresponding matrix factorization function, when called with two matrix arguments.

IfF::GeneralizedEigen is the factorization object, the eigenvalues can be obtained viaF.values and the eigenvectors as the columns of the matrixF.vectors. (Thekth eigenvector can be obtained from the sliceF.vectors[:, k].)

Iterating the decomposition produces the componentsF.values andF.vectors.

Examples

julia> A = [1 0; 0 -1]2×2 Matrix{Int64}: 1   0 0  -1julia> B = [0 1; 1 0]2×2 Matrix{Int64}: 0  1 1  0julia> F = eigen(A, B)GeneralizedEigen{ComplexF64, ComplexF64, Matrix{ComplexF64}, Vector{ComplexF64}}values:2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0imvectors:2×2 Matrix{ComplexF64}:  0.0+1.0im   0.0-1.0im -1.0+0.0im  -1.0-0.0imjulia> F.values2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0imjulia> F.vectors2×2 Matrix{ComplexF64}:  0.0+1.0im   0.0-1.0im -1.0+0.0im  -1.0-0.0imjulia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
LinearAlgebra.eigvalsFunction
eigvals(A; permute::Bool=true, scale::Bool=true, sortby) -> values

Return the eigenvalues ofA.

For general non-symmetric matrices it is possible to specify how the matrix is balanced before the eigenvalue calculation. Thepermute,scale, andsortby keywords are the same as foreigen.

Examples

julia> diag_matrix = [1 0; 0 4]
2×2 Matrix{Int64}:
 1  0
 0  4

julia> eigvals(diag_matrix)
2-element Vector{Float64}:
 1.0
 4.0

For a scalar input,eigvals will return a scalar.

Examples

julia> eigvals(-2)
-2
eigvals(A, B) -> values

Compute the generalized eigenvalues ofA andB.

Examples

julia> A = [1 0; 0 -1]
2×2 Matrix{Int64}:
 1   0
 0  -1

julia> B = [0 1; 1 0]
2×2 Matrix{Int64}:
 0  1
 1  0

julia> eigvals(A,B)
2-element Vector{ComplexF64}:
 0.0 - 1.0im
 0.0 + 1.0im
eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values

Return the eigenvalues ofA. It is possible to calculate only a subset of the eigenvalues by specifying aUnitRangeirange covering indices of the sorted eigenvalues, e.g. the 2nd to 8th eigenvalues.

Examples

julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
3×3 SymTridiagonal{Float64, Vector{Float64}}:
 1.0  2.0   ⋅
 2.0  2.0  3.0
  ⋅   3.0  1.0

julia> eigvals(A, 2:2)
1-element Vector{Float64}:
 0.9999999999999996

julia> eigvals(A)
3-element Vector{Float64}:
 -2.1400549446402604
  1.0000000000000002
  5.140054944640259
eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values

Return the eigenvalues ofA. It is possible to calculate only a subset of the eigenvalues by specifying a pairvl andvu for the lower and upper boundaries of the eigenvalues.

Examples

julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
3×3 SymTridiagonal{Float64, Vector{Float64}}:
 1.0  2.0   ⋅
 2.0  2.0  3.0
  ⋅   3.0  1.0

julia> eigvals(A, -1, 2)
1-element Vector{Float64}:
 1.0000000000000009

julia> eigvals(A)
3-element Vector{Float64}:
 -2.1400549446402604
  1.0000000000000002
  5.140054944640259
LinearAlgebra.eigvals!Function
eigvals!(A; permute::Bool=true, scale::Bool=true, sortby) -> values

Same aseigvals, but saves space by overwriting the inputA, instead of creating a copy. Thepermute,scale, andsortby keywords are the same as foreigen.

Note

The input matrixA will not contain its eigenvalues aftereigvals! is called on it -A is used as a workspace.

Examples

julia> A = [1. 2.; 3. 4.]
2×2 Matrix{Float64}:
 1.0  2.0
 3.0  4.0

julia> eigvals!(A)
2-element Vector{Float64}:
 -0.3722813232690143
  5.372281323269014

julia> A
2×2 Matrix{Float64}:
 -0.372281  -1.0
  0.0        5.37228
eigvals!(A, B; sortby) -> values

Same aseigvals, but saves space by overwriting the inputA (andB), instead of creating copies.

Note

The input matricesA andB will not contain their eigenvalues aftereigvals! is called. They are used as workspaces.

Examples

julia> A = [1. 0.; 0. -1.]
2×2 Matrix{Float64}:
 1.0   0.0
 0.0  -1.0

julia> B = [0. 1.; 1. 0.]
2×2 Matrix{Float64}:
 0.0  1.0
 1.0  0.0

julia> eigvals!(A, B)
2-element Vector{ComplexF64}:
 0.0 - 1.0im
 0.0 + 1.0im

julia> A
2×2 Matrix{Float64}:
 -0.0  -1.0
  1.0  -0.0

julia> B
2×2 Matrix{Float64}:
 1.0  0.0
 0.0  1.0
eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values

Same aseigvals, but saves space by overwriting the inputA, instead of creating a copy.irange is a range of eigenvalueindices to search for - for instance, the 2nd to 8th eigenvalues.

eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values

Same aseigvals, but saves space by overwriting the inputA, instead of creating a copy.vl is the lower bound of the interval to search for eigenvalues, andvu is the upper bound.

LinearAlgebra.eigmaxFunction
eigmax(A; permute::Bool=true, scale::Bool=true)

Return the largest eigenvalue ofA. The optionpermute=true permutes the matrix to become closer to upper triangular, andscale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues ofA are complex, this method will fail, since complex numbers cannot be sorted.

Examples

julia> A = [0 im; -im 0]
2×2 Matrix{Complex{Int64}}:
 0+0im  0+1im
 0-1im  0+0im

julia> eigmax(A)
1.0

julia> A = [0 im; -1 0]
2×2 Matrix{Complex{Int64}}:
  0+0im  0+1im
 -1+0im  0+0im

julia> eigmax(A)
ERROR: DomainError with Complex{Int64}[0+0im 0+1im; -1+0im 0+0im]:
`A` cannot have complex eigenvalues.
Stacktrace:
[...]
LinearAlgebra.eigminFunction
eigmin(A; permute::Bool=true, scale::Bool=true)

Return the smallest eigenvalue ofA. The optionpermute=true permutes the matrix to become closer to upper triangular, andscale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues ofA are complex, this method will fail, since complex numbers cannot be sorted.

Examples

julia> A = [0 im; -im 0]
2×2 Matrix{Complex{Int64}}:
 0+0im  0+1im
 0-1im  0+0im

julia> eigmin(A)
-1.0

julia> A = [0 im; -1 0]
2×2 Matrix{Complex{Int64}}:
  0+0im  0+1im
 -1+0im  0+0im

julia> eigmin(A)
ERROR: DomainError with Complex{Int64}[0+0im 0+1im; -1+0im 0+0im]:
`A` cannot have complex eigenvalues.
Stacktrace:
[...]
LinearAlgebra.eigvecsFunction
eigvecs(A::SymTridiagonal[, eigvals]) -> Matrix

Return a matrixM whose columns are the eigenvectors ofA. (Thekth eigenvector can be obtained from the sliceM[:, k].)

If the optional vector of eigenvalueseigvals is specified,eigvecs returns the specific corresponding eigenvectors.

Examples

julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])3×3 SymTridiagonal{Float64, Vector{Float64}}: 1.0  2.0   ⋅ 2.0  2.0  3.0  ⋅   3.0  1.0julia> eigvals(A)3-element Vector{Float64}: -2.1400549446402604  1.0000000000000002  5.140054944640259julia> eigvecs(A)3×3 Matrix{Float64}:  0.418304  -0.83205      0.364299 -0.656749  -7.39009e-16  0.754109  0.627457   0.5547       0.546448julia> eigvecs(A, [1.])3×1 Matrix{Float64}:  0.8320502943378438  4.263514128092366e-17 -0.5547001962252291
eigvecs(A; permute::Bool=true, scale::Bool=true, sortby) -> Matrix

Return a matrixM whose columns are the eigenvectors ofA. (Thekth eigenvector can be obtained from the sliceM[:, k].) Thepermute,scale, andsortby keywords are the same as foreigen.

Examples

julia> eigvecs([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
3×3 Matrix{Float64}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0
eigvecs(A, B) -> Matrix

Return a matrixM whose columns are the generalized eigenvectors ofA andB. (Thekth eigenvector can be obtained from the sliceM[:, k].)

Examples

julia> A = [1 0; 0 -1]
2×2 Matrix{Int64}:
 1   0
 0  -1

julia> B = [0 1; 1 0]
2×2 Matrix{Int64}:
 0  1
 1  0

julia> eigvecs(A, B)
2×2 Matrix{ComplexF64}:
  0.0+1.0im   0.0-1.0im
 -1.0+0.0im  -1.0-0.0im
LinearAlgebra.eigenFunction
eigen(A; permute::Bool=true, scale::Bool=true, sortby) -> Eigen

Compute the eigenvalue decomposition ofA, returning anEigen factorization objectF which contains the eigenvalues inF.values and the eigenvectors in the columns of the matrixF.vectors. This corresponds to solving an eigenvalue problem of the formAx = λx, whereA is a matrix,x is an eigenvector, andλ is an eigenvalue. (Thekth eigenvector can be obtained from the sliceF.vectors[:, k].)

Iterating the decomposition produces the componentsF.values andF.vectors.

The following functions are available forEigen objects:inv,det, andisposdef.

For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The optionpermute=true permutes the matrix to become closer to upper triangular, andscale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default istrue for both options.

By default, the eigenvalues and vectors are sorted lexicographically by(real(λ),imag(λ)). A different comparison functionby(λ) can be passed tosortby, or you can passsortby=nothing to leave the eigenvalues in an arbitrary order. Some special matrix types (e.g.Diagonal orSymTridiagonal) may implement their own sorting convention and not accept asortby keyword.
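A minimal sketch of the sortby keyword (the matrix is illustrative):

using LinearAlgebra

A = [0.0 -1.0; 1.0 0.0]                  # eigenvalues ±im
eigen(A).values                          # default order: 0.0-1.0im, then 0.0+1.0im
eigen(A; sortby = λ -> -imag(λ)).values  # custom key: 0.0+1.0im first
eigen(A; sortby = nothing).values        # left in the order the solver returns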

Examples

julia> F = eigen([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}values:3-element Vector{Float64}:  1.0  3.0 18.0vectors:3×3 Matrix{Float64}: 1.0  0.0  0.0 0.0  1.0  0.0 0.0  0.0  1.0julia> F.values3-element Vector{Float64}:  1.0  3.0 18.0julia> F.vectors3×3 Matrix{Float64}: 1.0  0.0  0.0 0.0  1.0  0.0 0.0  0.0  1.0julia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
eigen(A, B; sortby) -> GeneralizedEigen

Compute the generalized eigenvalue decomposition ofA andB, returning aGeneralizedEigen factorization objectF which contains the generalized eigenvalues inF.values and the generalized eigenvectors in the columns of the matrixF.vectors. This corresponds to solving a generalized eigenvalue problem of the formAx = λBx, whereA, B are matrices,x is an eigenvector, andλ is an eigenvalue. (Thekth generalized eigenvector can be obtained from the sliceF.vectors[:, k].)

Iterating the decomposition produces the componentsF.values andF.vectors.

By default, the eigenvalues and vectors are sorted lexicographically by(real(λ),imag(λ)). A different comparison functionby(λ) can be passed tosortby, or you can passsortby=nothing to leave the eigenvalues in an arbitrary order.

Examples

julia> A = [1 0; 0 -1]2×2 Matrix{Int64}: 1   0 0  -1julia> B = [0 1; 1 0]2×2 Matrix{Int64}: 0  1 1  0julia> F = eigen(A, B);julia> F.values2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0imjulia> F.vectors2×2 Matrix{ComplexF64}:  0.0+1.0im   0.0-1.0im -1.0+0.0im  -1.0-0.0imjulia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
eigen(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> Eigen

Compute the eigenvalue decomposition ofA, returning anEigen factorization objectF which contains the eigenvalues inF.values and the eigenvectors in the columns of the matrixF.vectors. (Thekth eigenvector can be obtained from the sliceF.vectors[:, k].)

Iterating the decomposition produces the componentsF.values andF.vectors.

The following functions are available forEigen objects:inv,det, andisposdef.

TheUnitRangeirange specifies indices of the sorted eigenvalues to search for.

Note

Ifirange is not1:n, wheren is the dimension ofA, then the returned factorization will be atruncated factorization.

eigen(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> Eigen

Compute the eigenvalue decomposition ofA, returning anEigen factorization objectF which contains the eigenvalues inF.values and the eigenvectors in the columns of the matrixF.vectors. (Thekth eigenvector can be obtained from the sliceF.vectors[:, k].)

Iterating the decomposition produces the componentsF.values andF.vectors.

The following functions are available forEigen objects:inv,det, andisposdef.

vl is the lower bound of the window of eigenvalues to search for, andvu is the upper bound.

Note

If [vl,vu] does not contain all eigenvalues ofA, then the returned factorization will be atruncated factorization.

LinearAlgebra.eigen!Function
eigen!(A; permute, scale, sortby)
eigen!(A, B; sortby)

Same aseigen, but saves space by overwriting the inputA (andB), instead of creating a copy.

LinearAlgebra.HessenbergType
Hessenberg <: Factorization

AHessenberg object represents the Hessenberg factorizationQHQ' of a square matrix, or a shiftQ(H+μI)Q' thereof, which is produced by thehessenberg function.

LinearAlgebra.hessenbergFunction
hessenberg(A) -> Hessenberg

Compute the Hessenberg decomposition ofA and return aHessenberg object. IfF is the factorization object, the unitary matrix can be accessed withF.Q (of typeLinearAlgebra.HessenbergQ) and the Hessenberg matrix withF.H (of typeUpperHessenberg), either of which may be converted to a regular matrix withMatrix(F.H) orMatrix(F.Q).

IfA isHermitian or real-Symmetric, then the Hessenberg decomposition produces a real-symmetric tridiagonal matrix andF.H is of typeSymTridiagonal.

Note that the shifted factorizationA+μI = Q (H+μI) Q' can be constructed efficiently byF + μ*I using theUniformScaling objectI, which creates a newHessenberg object with shared storage and a modified shift. The shift of a givenF is obtained byF.μ. This is useful because multiple shifted solves(F + μ*I) \ b (for differentμ and/orb) can be performed efficiently onceF is created.
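A minimal sketch of reusing one Hessenberg factorization across several shifted solves (the matrix and shifts are illustrative):

using LinearAlgebra

A = [4.0 9.0 7.0; 4.0 4.0 1.0; 4.0 3.0 2.0]
F = hessenberg(A)         # factor once
b = ones(3)

x1 = (F + 2.0*I) \ b      # shifted solve, no refactorization
(A + 2.0*I) * x1 ≈ b      # true

x2 = (F - 0.5*I) \ b      # a second shift reuses the same F
(A - 0.5*I) * x2 ≈ b      # true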

Iterating the decomposition produces the factorsF.Q, F.H, F.μ.

Examples

julia> A = [4. 9. 7.; 4. 4. 1.; 4. 3. 2.]3×3 Matrix{Float64}: 4.0  9.0  7.0 4.0  4.0  1.0 4.0  3.0  2.0julia> F = hessenberg(A)Hessenberg{Float64, UpperHessenberg{Float64, Matrix{Float64}}, Matrix{Float64}, Vector{Float64}, Bool}Q factor: 3×3 LinearAlgebra.HessenbergQ{Float64, Matrix{Float64}, Vector{Float64}, false}H factor:3×3 UpperHessenberg{Float64, Matrix{Float64}}:  4.0      -11.3137       -1.41421 -5.65685    5.0           2.0   ⋅        -8.88178e-16   1.0julia> F.Q * F.H * F.Q'3×3 Matrix{Float64}: 4.0  9.0  7.0 4.0  4.0  1.0 4.0  3.0  2.0julia> q, h = F; # destructuring via iterationjulia> q == F.Q && h == F.Htrue
LinearAlgebra.hessenberg!Function
hessenberg!(A) -> Hessenberg

hessenberg! is the same ashessenberg, but saves space by overwriting the inputA, instead of creating a copy.

LinearAlgebra.SchurType
Schur <: Factorization

Matrix factorization type of the Schur factorization of a matrixA. This is the return type ofschur(_), the corresponding matrix factorization function.

IfF::Schur is the factorization object, the (quasi) triangular Schur factor can be obtained via eitherF.Schur orF.T and the orthogonal/unitary Schur vectors viaF.vectors orF.Z such thatA = F.vectors * F.Schur * F.vectors'. The eigenvalues ofA can be obtained withF.values.

Iterating the decomposition produces the componentsF.T,F.Z, andF.values.

Examples

julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> F = schur(A)Schur{Float64, Matrix{Float64}, Vector{Float64}}T factor:2×2 Matrix{Float64}: 3.0   9.0 0.0  -2.0Z factor:2×2 Matrix{Float64}:  0.961524  0.274721 -0.274721  0.961524eigenvalues:2-element Vector{Float64}:  3.0 -2.0julia> F.vectors * F.Schur * F.vectors'2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> t, z, vals = F; # destructuring via iterationjulia> t == F.T && z == F.Z && vals == F.valuestrue
LinearAlgebra.GeneralizedSchurType
GeneralizedSchur <: Factorization

Matrix factorization type of the generalized Schur factorization of two matricesA andB. This is the return type ofschur(_, _), the corresponding matrix factorization function.

IfF::GeneralizedSchur is the factorization object, the (quasi) triangular Schur factors can be obtained viaF.S andF.T, the left unitary/orthogonal Schur vectors viaF.left orF.Q, and the right unitary/orthogonal Schur vectors can be obtained withF.right orF.Z such thatA=F.left*F.S*F.right' andB=F.left*F.T*F.right'. The generalized eigenvalues ofA andB can be obtained withF.α./F.β.

Iterating the decomposition produces the componentsF.S,F.T,F.Q,F.Z,F.α, andF.β.
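A minimal sketch of working with a GeneralizedSchur object (the matrices are illustrative):

using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]
B = [1.0 0.0; 0.0 2.0]
F = schur(A, B)
A ≈ F.left * F.S * F.right'   # true
B ≈ F.left * F.T * F.right'   # true
F.α ./ F.β                    # the generalized eigenvalues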

LinearAlgebra.schurFunction
schur(A) -> F::Schur

Computes the Schur factorization of the matrixA. The (quasi) triangular Schur factor can be obtained from theSchur objectF with eitherF.Schur orF.T and the orthogonal/unitary Schur vectors can be obtained withF.vectors orF.Z such thatA = F.vectors * F.Schur * F.vectors'. The eigenvalues ofA can be obtained withF.values.

For realA, the Schur factorization is "quasitriangular", which means that it is upper-triangular except with 2×2 diagonal blocks for any conjugate pair of complex eigenvalues; this allows the factorization to be purely real even when there are complex eigenvalues. To obtain the (complex) purely upper-triangular Schur factorization from a real quasitriangular factorization, you can useSchur{Complex}(schur(A)).
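A minimal sketch of converting a real quasitriangular factorization to the complex triangular form (the matrix is illustrative):

using LinearAlgebra

A = [0.0 -1.0; 1.0 0.0]     # real matrix with eigenvalues ±im
F = schur(A)                # real T contains a 2×2 diagonal block
Fc = Schur{Complex}(F)      # complex, genuinely upper-triangular T
istriu(Fc.T)                # true
A ≈ Fc.vectors * Fc.T * Fc.vectors'   # true: still a Schur factorization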

Iterating the decomposition produces the componentsF.T,F.Z, andF.values.

Examples

julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> F = schur(A)Schur{Float64, Matrix{Float64}, Vector{Float64}}T factor:2×2 Matrix{Float64}: 3.0   9.0 0.0  -2.0Z factor:2×2 Matrix{Float64}:  0.961524  0.274721 -0.274721  0.961524eigenvalues:2-element Vector{Float64}:  3.0 -2.0julia> F.vectors * F.Schur * F.vectors'2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> t, z, vals = F; # destructuring via iterationjulia> t == F.T && z == F.Z && vals == F.valuestrue
schur(A, B) -> F::GeneralizedSchur

Computes the Generalized Schur (or QZ) factorization of the matricesA andB. The (quasi) triangular Schur factors can be obtained from theSchur objectF withF.S andF.T, the left unitary/orthogonal Schur vectors can be obtained withF.left orF.Q and the right unitary/orthogonal Schur vectors can be obtained withF.right orF.Z such thatA=F.left*F.S*F.right' andB=F.left*F.T*F.right'. The generalized eigenvalues ofA andB can be obtained withF.α./F.β.

Iterating the decomposition produces the componentsF.S,F.T,F.Q,F.Z,F.α, andF.β.

LinearAlgebra.schur!Function
schur!(A) -> F::Schur

Same asschur but uses the input argumentA as workspace.

Examples

julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}:  5.0   7.0 -2.0  -4.0julia> F = schur!(A)Schur{Float64, Matrix{Float64}, Vector{Float64}}T factor:2×2 Matrix{Float64}: 3.0   9.0 0.0  -2.0Z factor:2×2 Matrix{Float64}:  0.961524  0.274721 -0.274721  0.961524eigenvalues:2-element Vector{Float64}:  3.0 -2.0julia> A2×2 Matrix{Float64}: 3.0   9.0 0.0  -2.0
schur!(A::StridedMatrix, B::StridedMatrix) -> F::GeneralizedSchur

Same asschur but uses the input matricesA andB as workspace.

LinearAlgebra.ordschurFunction
ordschur(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur

Reorders the Schur factorizationF of a matrixA = Z*T*Z' according to the logical arrayselect returning the reordered factorizationF object. The selected eigenvalues appear in the leading diagonal ofF.Schur and the corresponding leading columns ofF.vectors form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded viaselect.
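A minimal sketch (the matrix and selection are illustrative):

using LinearAlgebra

A = [5.0 7.0; -2.0 -4.0]                  # eigenvalues 3 and -2
F = schur(A)
select = map(λ -> real(λ) < 0, F.values)  # bring the negative eigenvalue forward
F2 = ordschur(F, select)
F2.values                                 # -2.0 now appears first
A ≈ F2.vectors * F2.Schur * F2.vectors'   # true: still a Schur factorization of A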

ordschur(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur

Reorders the Generalized Schur factorizationF of a matrix pair(A, B) = (Q*S*Z', Q*T*Z') according to the logical arrayselect and returns a GeneralizedSchur objectF. The selected eigenvalues appear in the leading diagonal of bothF.S andF.T, and the left and right orthogonal/unitary Schur vectors are also reordered such that(A, B) = F.Q*(F.S, F.T)*F.Z' still holds and the generalized eigenvalues ofA andB can still be obtained withF.α./F.β.

LinearAlgebra.ordschur!Function
ordschur!(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur

Same asordschur but overwrites the factorizationF.

ordschur!(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur

Same asordschur but overwrites the factorizationF.

LinearAlgebra.SVDType
SVD <: Factorization

Matrix factorization type of the singular value decomposition (SVD) of a matrixA. This is the return type ofsvd(_), the corresponding matrix factorization function.

IfF::SVD is the factorization object,U,S,V andVt can be obtained viaF.U,F.S,F.V andF.Vt, such thatA = U * Diagonal(S) * Vt. The singular values inS are sorted in descending order.

Iterating the decomposition produces the componentsU,S, andV.

Examples

julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]4×5 Matrix{Float64}: 1.0  0.0  0.0  0.0  2.0 0.0  0.0  3.0  0.0  0.0 0.0  0.0  0.0  0.0  0.0 0.0  2.0  0.0  0.0  0.0julia> F = svd(A)SVD{Float64, Float64, Matrix{Float64}, Vector{Float64}}U factor:4×4 Matrix{Float64}: 0.0  1.0   0.0  0.0 1.0  0.0   0.0  0.0 0.0  0.0   0.0  1.0 0.0  0.0  -1.0  0.0singular values:4-element Vector{Float64}: 3.0 2.23606797749979 2.0 0.0Vt factor:4×5 Matrix{Float64}: -0.0        0.0  1.0  -0.0  0.0  0.447214   0.0  0.0   0.0  0.894427  0.0       -1.0  0.0   0.0  0.0  0.0        0.0  0.0   1.0  0.0julia> F.U * Diagonal(F.S) * F.Vt4×5 Matrix{Float64}: 1.0  0.0  0.0  0.0  2.0 0.0  0.0  3.0  0.0  0.0 0.0  0.0  0.0  0.0  0.0 0.0  2.0  0.0  0.0  0.0julia> u, s, v = F; # destructuring via iterationjulia> u == F.U && s == F.S && v == F.Vtrue
LinearAlgebra.GeneralizedSVDType
GeneralizedSVD <: Factorization

Matrix factorization type of the generalized singular value decomposition (SVD) of two matricesA andB, such thatA = F.U*F.D1*F.R0*F.Q' andB = F.V*F.D2*F.R0*F.Q'. This is the return type ofsvd(_, _), the corresponding matrix factorization function.

For an M-by-N matrixA and P-by-N matrixB,

  • U is a M-by-M orthogonal matrix,
  • V is a P-by-P orthogonal matrix,
  • Q is a N-by-N orthogonal matrix,
  • D1 is a M-by-(K+L) diagonal matrix with 1s in the first K entries,
  • D2 is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
  • R0 is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,

K+L is the effective numerical rank of the matrix[A; B].

Iterating the decomposition produces the componentsU,V,Q,D1,D2, andR0.

The entries ofF.D1 andF.D2 are related, as explained in the LAPACK documentation for thegeneralized SVD and thexGGSVD3 routine which is called underneath (in LAPACK 3.6.0 and newer).

Examples

julia> A = [1. 0.; 0. -1.]2×2 Matrix{Float64}: 1.0   0.0 0.0  -1.0julia> B = [0. 1.; 1. 0.]2×2 Matrix{Float64}: 0.0  1.0 1.0  0.0julia> F = svd(A, B)GeneralizedSVD{Float64, Matrix{Float64}, Float64, Vector{Float64}}U factor:2×2 Matrix{Float64}: 1.0  0.0 0.0  1.0V factor:2×2 Matrix{Float64}: -0.0  -1.0  1.0   0.0Q factor:2×2 Matrix{Float64}: 1.0  0.0 0.0  1.0D1 factor:2×2 Matrix{Float64}: 0.707107  0.0 0.0       0.707107D2 factor:2×2 Matrix{Float64}: 0.707107  0.0 0.0       0.707107R0 factor:2×2 Matrix{Float64}: 1.41421   0.0 0.0      -1.41421julia> F.U*F.D1*F.R0*F.Q'2×2 Matrix{Float64}: 1.0   0.0 0.0  -1.0julia> F.V*F.D2*F.R0*F.Q'2×2 Matrix{Float64}: -0.0  1.0  1.0  0.0
LinearAlgebra.svdFunction
svd(A; full::Bool = false, alg::Algorithm = default_svd_alg(A)) -> SVD

Compute the singular value decomposition (SVD) ofA and return anSVD object.

U,S,V andVt can be obtained from the factorizationF withF.U,F.S,F.V andF.Vt, such thatA = U * Diagonal(S) * Vt. The algorithm producesVt and henceVt is more efficient to extract thanV. The singular values inS are sorted in descending order.

Iterating the decomposition produces the componentsU,S, andV.

Iffull = false (default), a "thin" SVD is returned. For an$M \times N$ matrixA, in the full factorizationU is$M \times M$ andV is$N \times N$, while in the thin factorizationU is$M \times K$ andV is$N \times K$, where$K = \min(M,N)$ is the number of singular values.

Ifalg = DivideAndConquer() a divide-and-conquer algorithm is used to calculate the SVD. Another (typically slower but more accurate) option isalg = QRIteration().

Julia 1.3

Thealg keyword argument requires Julia 1.3 or later.
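A minimal sketch of the full and alg keywords (the matrix is illustrative):

using LinearAlgebra

A = randn(5, 3)
Fthin = svd(A)                   # thin: U is 5×3
Ffull = svd(A; full = true)      # full: U is 5×5
size(Fthin.U), size(Ffull.U)     # ((5, 3), (5, 5))

Fqr = svd(A; alg = LinearAlgebra.QRIteration())
Fthin.S ≈ Fqr.S                  # true: same singular values, different algorithm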

Examples

julia> A = rand(4,3);

julia> F = svd(A); # Store the Factorization Object

julia> A ≈ F.U * Diagonal(F.S) * F.Vt
true

julia> U, S, V = F; # destructuring via iteration

julia> A ≈ U * Diagonal(S) * V'
true

julia> Uonly, = svd(A); # Store U only

julia> Uonly == U
true
svd(A, B) -> GeneralizedSVD

Compute the generalized SVD ofA andB, returning aGeneralizedSVD factorization objectF such that[A;B] = [F.U * F.D1; F.V * F.D2] * F.R0 * F.Q'

  • U is a M-by-M orthogonal matrix,
  • V is a P-by-P orthogonal matrix,
  • Q is a N-by-N orthogonal matrix,
  • D1 is a M-by-(K+L) diagonal matrix with 1s in the first K entries,
  • D2 is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
  • R0 is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,

K+L is the effective numerical rank of the matrix[A; B].

Iterating the decomposition produces the componentsU,V,Q,D1,D2, andR0.

The generalized SVD is used in applications such as when one wants to compare how much belongs toA vs. how much belongs toB, as in human vs yeast genome, or signal vs noise, or between clusters vs within clusters. (See Edelman and Wang for discussion: https://arxiv.org/abs/1901.00485)

It decomposes[A; B] into[UC; VS]H, where[UC; VS] is a natural orthogonal basis for the column space of[A; B], andH = RQ' is a natural non-orthogonal basis for the rowspace of[A;B], where the top rows are most closely attributed to theA matrix, and the bottom to theB matrix. The multi-cosine/sine matricesC andS provide a multi-measure of how muchA vs how muchB, andU andV provide directions in which these are measured.

Examples

julia> A = randn(3,2); B=randn(4,2);

julia> F = svd(A, B);

julia> U,V,Q,C,S,R = F;

julia> H = R*Q';

julia> [A; B] ≈ [U*C; V*S]*H
true

julia> [A; B] ≈ [F.U*F.D1; F.V*F.D2]*F.R0*F.Q'
true

julia> Uonly, = svd(A,B);

julia> U == Uonly
true
LinearAlgebra.svd!Function
svd!(A; full::Bool = false, alg::Algorithm = default_svd_alg(A)) -> SVD

svd! is the same assvd, but saves space by overwriting the inputA, instead of creating a copy. See documentation ofsvd for details.

svd!(A, B) -> GeneralizedSVD

svd! is the same assvd, but modifies the argumentsA andB in-place, instead of making copies. See documentation ofsvd for details.

LinearAlgebra.svdvalsFunction
svdvals(A)

Return the singular values ofA in descending order.

Examples

julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]
4×5 Matrix{Float64}:
 1.0  0.0  0.0  0.0  2.0
 0.0  0.0  3.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  2.0  0.0  0.0  0.0

julia> svdvals(A)
4-element Vector{Float64}:
 3.0
 2.23606797749979
 2.0
 0.0
svdvals(A, B)

Return the generalized singular values from the generalized singular value decomposition ofA andB. See alsosvd.

Examples

julia> A = [1. 0.; 0. -1.]
2×2 Matrix{Float64}:
 1.0   0.0
 0.0  -1.0

julia> B = [0. 1.; 1. 0.]
2×2 Matrix{Float64}:
 0.0  1.0
 1.0  0.0

julia> svdvals(A, B)
2-element Vector{Float64}:
 1.0
 1.0
LinearAlgebra.svdvals!Function
svdvals!(A)

Return the singular values ofA, saving space by overwriting the input. See alsosvdvals andsvd.

svdvals!(A, B)

Return the generalized singular values from the generalized singular value decomposition ofA andB, saving space by overwritingA andB. See alsosvd andsvdvals.

LinearAlgebra.GivensType
LinearAlgebra.Givens(i1,i2,c,s) -> G

A Givens rotation linear operator. The fieldsc ands represent the cosine and sine of the rotation angle, respectively. TheGivens type supports left multiplicationG*A and conjugated transpose right multiplicationA*G'. The type doesn't have asize and can therefore be multiplied with matrices of arbitrary size as long asi2<=size(A,2) forG*A ori2<=size(A,1) forA*G'.

See alsogivens.

LinearAlgebra.givensFunction
givens(f::T, g::T, i1::Integer, i2::Integer) where {T} -> (G::Givens, r::T)

Computes the Givens rotationG and scalarr such that for any vectorx where

x[i1] = f
x[i2] = g

the result of the multiplication

y = G*x

has the property that

y[i1] = r
y[i2] = 0

See alsoLinearAlgebra.Givens.

givens(A::AbstractArray, i1::Integer, i2::Integer, j::Integer) -> (G::Givens, r)

Computes the Givens rotationG and scalarr such that the result of the multiplication

B = G*A

has the property that

B[i1,j] = r
B[i2,j] = 0

See alsoLinearAlgebra.Givens.

givens(x::AbstractVector, i1::Integer, i2::Integer) -> (G::Givens, r)

Computes the Givens rotationG and scalarr such that the result of the multiplication

B = G*x

has the property that

B[i1] = r
B[i2] = 0

See alsoLinearAlgebra.Givens.
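A minimal sketch combining the type and the constructor (the vector is illustrative):

using LinearAlgebra

x = [3.0, 0.0, 4.0]
G, r = givens(x, 1, 3)    # rotation that folds x[3] into x[1]
y = G * x
y[1] ≈ r                  # true: r == 5.0 == hypot(x[1], x[3])
abs(y[3]) < 1e-14         # true: the targeted entry is annihilated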

LinearAlgebra.triuFunction
triu(M)

Upper triangle of a matrix.

Examples

julia> a = fill(1.0, (4,4))
4×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0

julia> triu(a)
4×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 0.0  1.0  1.0  1.0
 0.0  0.0  1.0  1.0
 0.0  0.0  0.0  1.0
triu(M, k::Integer)

Return the upper triangle ofM starting from thekth superdiagonal.

Examples

julia> a = fill(1.0, (4,4))
4×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0

julia> triu(a,3)
4×4 Matrix{Float64}:
 0.0  0.0  0.0  1.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0

julia> triu(a,-3)
4×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
LinearAlgebra.triu!Function
triu!(M)

Upper triangle of a matrix, overwritingM in the process. See alsotriu.

triu!(M, k::Integer)

Return the upper triangle ofM starting from thekth superdiagonal, overwritingM in the process.

Examples

julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
5×5 Matrix{Int64}:
 1  2  3  4  5
 1  2  3  4  5
 1  2  3  4  5
 1  2  3  4  5
 1  2  3  4  5

julia> triu!(M, 1)
5×5 Matrix{Int64}:
 0  2  3  4  5
 0  0  3  4  5
 0  0  0  4  5
 0  0  0  0  5
 0  0  0  0  0
LinearAlgebra.trilFunction
tril(M)

Lower triangle of a matrix.

Examples

julia> a = fill(1.0, (4,4))
4×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0

julia> tril(a)
4×4 Matrix{Float64}:
 1.0  0.0  0.0  0.0
 1.0  1.0  0.0  0.0
 1.0  1.0  1.0  0.0
 1.0  1.0  1.0  1.0
tril(M, k::Integer)

Return the lower triangle ofM starting from thekth superdiagonal.

Examples

julia> a = fill(1.0, (4,4))
4×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0

julia> tril(a,3)
4×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0

julia> tril(a,-3)
4×4 Matrix{Float64}:
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0
LinearAlgebra.tril!Function
tril!(M)

Lower triangle of a matrix, overwritingM in the process. See alsotril.

tril!(M, k::Integer)

Return the lower triangle ofM starting from thekth superdiagonal, overwritingM in the process.

Examples

julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
5×5 Matrix{Int64}:
 1  2  3  4  5
 1  2  3  4  5
 1  2  3  4  5
 1  2  3  4  5
 1  2  3  4  5

julia> tril!(M, 2)
5×5 Matrix{Int64}:
 1  2  3  0  0
 1  2  3  4  0
 1  2  3  4  5
 1  2  3  4  5
 1  2  3  4  5
LinearAlgebra.diagindFunction
diagind(M::AbstractMatrix, k::Integer = 0, indstyle::IndexStyle = IndexLinear())
diagind(M::AbstractMatrix, indstyle::IndexStyle = IndexLinear())

AnAbstractRange giving the indices of thekth diagonal of the matrixM. Optionally, an index style may be specified which determines the type of the range returned. Ifindstyle isa IndexLinear (default), this returns anAbstractRange{Integer}. On the other hand, ifindstyle isa IndexCartesian, this returns anAbstractRange{CartesianIndex{2}}.

Ifk is not provided, it is assumed to be0 (corresponding to the main diagonal).

See also:diag,diagm,Diagonal.

Examples

julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Matrix{Int64}:
 1  2  3
 4  5  6
 7  8  9

julia> diagind(A, -1)
2:4:6

julia> diagind(A, IndexCartesian())
StepRangeLen(CartesianIndex(1, 1), CartesianIndex(1, 1), 3)
Julia 1.11

Specifying anIndexStyle requires at least Julia 1.11.

LinearAlgebra.diagFunction
diag(M, k::Integer=0)

Thekth diagonal of a matrix, as a vector.

See alsodiagm,diagind,Diagonal,isdiag.

Examples

julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Matrix{Int64}:
 1  2  3
 4  5  6
 7  8  9

julia> diag(A,1)
2-element Vector{Int64}:
 2
 6
LinearAlgebra.diagmFunction
diagm(kv::Pair{<:Integer,<:AbstractVector}...)
diagm(m::Integer, n::Integer, kv::Pair{<:Integer,<:AbstractVector}...)

Construct a matrix fromPairs of diagonals and vectors. Vectorkv.second will be placed on thekv.first diagonal. By default the matrix is square and its size is inferred fromkv, but a non-square sizem×n (padded with zeros as needed) can be specified by passingm,n as the first arguments. For repeated diagonal indiceskv.first the values in the corresponding vectorskv.second will be added.

diagm constructs a full matrix; if you want storage-efficient versions with fast arithmetic, seeDiagonal,Bidiagonal,Tridiagonal andSymTridiagonal.

Examples

julia> diagm(1 => [1,2,3])
4×4 Matrix{Int64}:
 0  1  0  0
 0  0  2  0
 0  0  0  3
 0  0  0  0

julia> diagm(1 => [1,2,3], -1 => [4,5])
4×4 Matrix{Int64}:
 0  1  0  0
 4  0  2  0
 0  5  0  3
 0  0  0  0

julia> diagm(1 => [1,2,3], 1 => [1,2,3])
4×4 Matrix{Int64}:
 0  2  0  0
 0  0  4  0
 0  0  0  6
 0  0  0  0
diagm(v::AbstractVector)
diagm(m::Integer, n::Integer, v::AbstractVector)

Construct a matrix with elements of the vector as diagonal elements. By default, the matrix is square and its size is given bylength(v), but a non-square sizem×n can be specified by passingm,n as the first arguments.

Examples

julia> diagm([1,2,3])
3×3 Matrix{Int64}:
 1  0  0
 0  2  0
 0  0  3
LinearAlgebra.rankFunction
rank(::QRSparse{Tv,Ti}) -> Ti

Return the rank of the QR factorization.

rank(S::SparseMatrixCSC{Tv,Ti}; [tol::Real]) -> Ti

Calculate the rank ofS from its QR factorization. Values smaller thantol are considered zero. See SPQR's manual.

rank(A::AbstractMatrix; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
rank(A::AbstractMatrix, rtol::Real)

Compute the numerical rank of a matrix by counting how many outputs ofsvdvals(A) are greater thanmax(atol, rtol*σ₁) whereσ₁ isA's largest calculated singular value.atol andrtol are the absolute and relative tolerances, respectively. The default relative tolerance isn*ϵ, wheren is the size of the smallest dimension ofA, andϵ is theeps of the element type ofA.

Note

Numerical rank can be a sensitive and imprecise characterization of ill-conditioned matrices with singular values that are close to the threshold tolerancemax(atol, rtol*σ₁). In such cases, slight perturbations to the singular-value computation or to the matrix can change the result ofrank by pushing one or more singular values across the threshold. These variations can even occur due to changes in floating-point errors between different Julia versions, architectures, compilers, or operating systems.

Julia 1.1

Theatol andrtol keyword arguments require at least Julia 1.1. In Julia 1.0rtol is available as a positional argument, but this will be deprecated in Julia 2.0.

Examples

julia> rank(Matrix(I, 3, 3))
3

julia> rank(diagm(0 => [1, 0, 2]))
2

julia> rank(diagm(0 => [1, 0.001, 2]), rtol=0.1)
2

julia> rank(diagm(0 => [1, 0.001, 2]), rtol=0.00001)
3

julia> rank(diagm(0 => [1, 0.001, 2]), atol=1.5)
1
LinearAlgebra.normFunction
norm(A, p::Real=2)

For any iterable containerA (including arrays of any dimension) of numbers (or any element type for whichnorm is defined), compute thep-norm (defaulting top=2) as ifA were a vector of the corresponding length.

Thep-norm is defined as

\[\|A\|_p = \left( \sum_{i=1}^n | a_i | ^p \right)^{1/p}\]

with$a_i$ the entries of$A$,$| a_i |$ thenorm of$a_i$, and$n$ the length of$A$. Since thep-norm is computed using thenorms of the entries ofA, thep-norm of a vector of vectors is not compatible with the interpretation of it as a block vector in general ifp != 2.

p can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular,norm(A, Inf) returns the largest value inabs.(A), whereasnorm(A, -Inf) returns the smallest. IfA is a matrix andp=2, then this is equivalent to the Frobenius norm.

The second argumentp is not necessarily a part of the interface fornorm, i.e. a custom type may only implementnorm(A) without second argument.

Useopnorm to compute the operator norm of a matrix.

Examples

julia> v = [3, -2, 6]3-element Vector{Int64}:  3 -2  6julia> norm(v)7.0julia> norm(v, 1)11.0julia> norm(v, Inf)6.0julia> norm([1 2 3; 4 5 6; 7 8 9])16.881943016134134julia> norm([1 2 3 4 5 6 7 8 9])16.881943016134134julia> norm(1:9)16.881943016134134julia> norm(hcat(v,v), 1) == norm(vcat(v,v), 1) != norm([v,v], 1)truejulia> norm(hcat(v,v), 2) == norm(vcat(v,v), 2) == norm([v,v], 2)truejulia> norm(hcat(v,v), Inf) == norm(vcat(v,v), Inf) != norm([v,v], Inf)true
norm(x::Number, p::Real=2)

For numbers, return$\left( |x|^p \right)^{1/p}$.

Examples

julia> norm(2, 1)
2.0

julia> norm(-2, 1)
2.0

julia> norm(2, 2)
2.0

julia> norm(-2, 2)
2.0

julia> norm(2, Inf)
2.0

julia> norm(-2, Inf)
2.0
LinearAlgebra.opnormFunction
opnorm(A::AbstractMatrix, p::Real=2)

Compute the operator norm (or matrix norm) induced by the vectorp-norm, where valid values ofp are1,2, orInf. (Note that for sparse matrices,p=2 is currently not implemented.) Usenorm to compute the Frobenius norm.

Whenp=1, the operator norm is the maximum absolute column sum ofA:

\[\|A\|_1 = \max_{1 ≤ j ≤ n} \sum_{i=1}^m | a_{ij} |\]

with$a_{ij}$ the entries of$A$, and$m$ and$n$ its dimensions.

Whenp=2, the operator norm is the spectral norm, equal to the largest singular value ofA.

Whenp=Inf, the operator norm is the maximum absolute row sum ofA:

\[\|A\|_\infty = \max_{1 ≤ i ≤ m} \sum _{j=1}^n | a_{ij} |\]

Examples

julia> A = [1 -2 -3; 2 3 -1]
2×3 Matrix{Int64}:
 1  -2  -3
 2   3  -1

julia> opnorm(A, Inf)
6.0

julia> opnorm(A, 1)
5.0
opnorm(x::Number, p::Real=2)

For numbers, return$\left( |x|^p \right)^{1/p}$. This is equivalent tonorm.

opnorm(A::Adjoint{<:Any,<:AbstractVector}, q::Real=2)
opnorm(A::Transpose{<:Any,<:AbstractVector}, q::Real=2)

For Adjoint/Transpose-wrapped vectors, return the operator$q$-norm ofA, which is equivalent to thep-norm with valuep = q/(q-1). They coincide atp = q = 2. Usenorm to compute thep norm ofA as a vector.

The difference in norm between a vector space and its dual arises to preserve the relationship between duality and the dot product, and the result is consistent with the operatorp-norm of a1 × n matrix.

Examples

julia> v = [1; im];julia> vc = v';julia> opnorm(vc, 1)1.0julia> norm(vc, 1)2.0julia> norm(v, 1)2.0julia> opnorm(vc, 2)1.4142135623730951julia> norm(vc, 2)1.4142135623730951julia> norm(v, 2)1.4142135623730951julia> opnorm(vc, Inf)2.0julia> norm(vc, Inf)1.0julia> norm(v, Inf)1.0
LinearAlgebra.normalize!Function
normalize!(a::AbstractArray, p::Real=2)

Normalize the arraya in-place so that itsp-norm equals unity, i.e.norm(a, p) == 1. See alsonormalize andnorm.
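A minimal sketch (the vector is illustrative):

using LinearAlgebra

a = [3.0, 4.0]
normalize!(a)     # overwrites a in place
a                 # [0.6, 0.8]
norm(a) ≈ 1.0     # true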

LinearAlgebra.normalizeFunction
normalize(a, p::Real=2)

Normalizea so that itsp-norm equals unity, i.e.norm(a, p) == 1. For scalars, this is similar to sign(a), except normalize(0) = NaN. See alsonormalize!,norm, andsign.

Examples

julia> a = [1,2,4];julia> b = normalize(a)3-element Vector{Float64}: 0.2182178902359924 0.4364357804719848 0.8728715609439696julia> norm(b)1.0julia> c = normalize(a, 1)3-element Vector{Float64}: 0.14285714285714285 0.2857142857142857 0.5714285714285714julia> norm(c, 1)1.0julia> a = [1 2 4 ; 1 2 4]2×3 Matrix{Int64}: 1  2  4 1  2  4julia> norm(a)6.48074069840786julia> normalize(a)2×3 Matrix{Float64}: 0.154303  0.308607  0.617213 0.154303  0.308607  0.617213julia> normalize(3, 1)1.0julia> normalize(-8, 1)-1.0julia> normalize(0, 1)NaN
LinearAlgebra.condFunction
cond(M, p::Real=2)

Condition number of the matrixM, computed using the operatorp-norm. Valid values forp are1,2 (default), orInf.
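A minimal sketch (the matrix is illustrative):

using LinearAlgebra

M = [1.0 0.0; 0.0 1.0e-3]
cond(M)        # 1000.0: ratio of largest to smallest singular value
cond(M, 1)     # 1000.0 for this diagonal example
cond(M, Inf)   # 1000.0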

LinearAlgebra.condskeelFunction
condskeel(M, [x, p::Real=Inf])

\[\kappa_S(M, p) = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \right\Vert_p \\\kappa_S(M, x, p) = \frac{\left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \left\vert x \right\vert \right\Vert_p}{\left \Vert x \right \Vert_p}\]

Skeel condition number$\kappa_S$ of the matrixM, optionally with respect to the vectorx, as computed using the operatorp-norm.$\left\vert M \right\vert$ denotes the matrix of (entry wise) absolute values of$M$;$\left\vert M \right\vert_{ij} = \left\vert M_{ij} \right\vert$. Valid values forp are1,2 andInf (default).

This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.
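
Unlike cond, this quantity is insensitive to column scaling, which can otherwise inflate the ordinary condition number; a small sketch:

julia> M = [1.0 0.0; 0.0 1.0e6];

julia> cond(M)
1.0e6

julia> condskeel(M)   # |M| * |inv(M)| is the identity here
1.0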

LinearAlgebra.trFunction
tr(M)

Matrix trace. Sums the diagonal elements ofM.

Examples

julia> A = [1 2; 3 4]2×2 Matrix{Int64}: 1  2 3  4julia> tr(A)5
LinearAlgebra.detFunction
det(M)

Matrix determinant.

See also:logdet andlogabsdet.

Examples

julia> M = [1 0; 2 2]2×2 Matrix{Int64}: 1  0 2  2julia> det(M)2.0
LinearAlgebra.logdetFunction
logdet(M)

Logarithm of matrix determinant. Equivalent tolog(det(M)), but may provide increased accuracy and avoids overflow/underflow.

Examples

julia> M = [1 0; 2 2]2×2 Matrix{Int64}: 1  0 2  2julia> logdet(M)0.6931471805599453julia> logdet(Matrix(I, 3, 3))0.0
LinearAlgebra.logabsdetFunction
logabsdet(M)

Log of absolute value of matrix determinant. Equivalent to(log(abs(det(M))), sign(det(M))), but may provide increased accuracy and/or speed.

Examples

julia> A = [-1. 0.; 0. 1.]2×2 Matrix{Float64}: -1.0  0.0  0.0  1.0julia> det(A)-1.0julia> logabsdet(A)(0.0, -1.0)julia> B = [2. 0.; 0. 1.]2×2 Matrix{Float64}: 2.0  0.0 0.0  1.0julia> det(B)2.0julia> logabsdet(B)(0.6931471805599453, 1.0)
Base.invMethod
inv(M)

Matrix inverse. Computes matrixN such thatM * N = I, whereI is the identity matrix. Computed by solving the left-divisionN = M \ I.

Examples

julia> M = [2 5; 1 3]2×2 Matrix{Int64}: 2  5 1  3julia> N = inv(M)2×2 Matrix{Float64}:  3.0  -5.0 -1.0   2.0julia> M*N == N*M == Matrix(I, 2, 2)true
LinearAlgebra.pinvFunction
pinv(M; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)pinv(M, rtol::Real) = pinv(M; rtol=rtol) # to be deprecated in Julia 2.0

Computes the Moore-Penrose pseudoinverse.

For matricesM with floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values greater thanmax(atol, rtol*σ₁) whereσ₁ is the largest singular value ofM.

The optimal choice of absolute (atol) and relative tolerance (rtol) varies both with the value ofM and the intended application of the pseudoinverse. The default relative tolerance isn*ϵ, wheren is the size of the smallest dimension ofM, andϵ is theeps of the element type ofM.

For inverting dense ill-conditioned matrices in a least-squares sense,rtol = sqrt(eps(real(float(oneunit(eltype(M)))))) is recommended.

For more information, see[issue8859],[B96],[S84],[KY88].

Examples

julia> M = [1.5 1.3; 1.2 1.9]2×2 Matrix{Float64}: 1.5  1.3 1.2  1.9julia> N = pinv(M)2×2 Matrix{Float64}:  1.47287   -1.00775 -0.930233   1.16279julia> M * N2×2 Matrix{Float64}: 1.0          -2.22045e-16 4.44089e-16   1.0
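
The tolerances control which singular values are treated as zero; a hedged sketch on a nearly singular matrix:

julia> M = [1.0 0.0; 0.0 1.0e-12];

julia> pinv(M)[2, 2] > 1.0e11    # by default the tiny singular value is inverted
true

julia> pinv(M; rtol = 1.0e-6)    # now treated as rank-deficient
2×2 Matrix{Float64}:
 1.0  0.0
 0.0  0.0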
LinearAlgebra.nullspaceFunction
nullspace(M; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)nullspace(M, rtol::Real) = nullspace(M; rtol=rtol) # to be deprecated in Julia 2.0

Computes a basis for the nullspace ofM by including the singular vectors ofM whose singular values have magnitudes smaller thanmax(atol, rtol*σ₁), whereσ₁ isM's largest singular value.

By default, the relative tolerancertol isn*ϵ, wheren is the size of the smallest dimension ofM, andϵ is theeps of the element type ofM.

Examples

julia> M = [1 0 0; 0 1 0; 0 0 0]3×3 Matrix{Int64}: 1  0  0 0  1  0 0  0  0julia> nullspace(M)3×1 Matrix{Float64}: 0.0 0.0 1.0julia> nullspace(M, rtol=3)3×3 Matrix{Float64}: 0.0  1.0  0.0 1.0  0.0  0.0 0.0  0.0  1.0julia> nullspace(M, atol=0.95)3×1 Matrix{Float64}: 0.0 0.0 1.0
Base.kronFunction
kron(A, B)

Computes the Kronecker product of two vectors, matrices or numbers.

For real vectorsv andw, the Kronecker product is related to the outer product bykron(v,w) == vec(w * transpose(v)) orw * transpose(v) == reshape(kron(v,w), (length(w), length(v))). Note how the ordering ofv andw differs on the left and right of these expressions (due to column-major storage). For complex vectors, the outer productw * v' also differs by conjugation ofv.

Examples

julia> A = [1 2; 3 4]2×2 Matrix{Int64}: 1  2 3  4julia> B = [im 1; 1 -im]2×2 Matrix{Complex{Int64}}: 0+1im  1+0im 1+0im  0-1imjulia> kron(A, B)4×4 Matrix{Complex{Int64}}: 0+1im  1+0im  0+2im  2+0im 1+0im  0-1im  2+0im  0-2im 0+3im  3+0im  0+4im  4+0im 3+0im  0-3im  4+0im  0-4imjulia> v = [1, 2]; w = [3, 4, 5];julia> w*transpose(v)3×2 Matrix{Int64}: 3   6 4   8 5  10julia> reshape(kron(v,w), (length(w), length(v)))3×2 Matrix{Int64}: 3   6 4   8 5  10
Base.kron!Function
kron!(C, A, B)

Computes the Kronecker product ofA andB and stores the result inC, overwriting the existing content ofC. This is the in-place version ofkron.

Julia 1.6

This function requires Julia 1.6 or later.
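
A minimal sketch, preallocating the output (for illustration only):

julia> A = [1 2; 3 4]; B = [0 1; 1 0];

julia> C = zeros(Int, 4, 4);

julia> kron!(C, A, B)
4×4 Matrix{Int64}:
 0  1  0  2
 1  0  2  0
 0  3  0  4
 3  0  4  0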

Base.expMethod
exp(A::AbstractMatrix)

Compute the matrix exponential ofA, defined by

\[e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}.\]

For symmetric or HermitianA, an eigendecomposition (eigen) is used, otherwise the scaling and squaring algorithm (see[H05]) is chosen.

Examples

julia> A = Matrix(1.0I, 2, 2)2×2 Matrix{Float64}: 1.0  0.0 0.0  1.0julia> exp(A)2×2 Matrix{Float64}: 2.71828  0.0 0.0      2.71828
Base.cisMethod
cis(A::AbstractMatrix)

More efficient method forexp(im*A) of square matrixA (especially ifA isHermitian or real-Symmetric).

See alsocispi,sincos,exp.

Julia 1.7

Support for usingcis with matrices was added in Julia 1.7.

Examples

julia> cis([π 0; 0 π]) ≈ -Itrue
Base.:^Method
^(A::AbstractMatrix, p::Number)

Matrix power, equivalent to$\exp(p\log(A))$

Examples

julia> [1 2; 0 3]^32×2 Matrix{Int64}: 1  26 0  27
Base.:^Method
^(b::Number, A::AbstractMatrix)

Matrix exponential, equivalent to$\exp(\log(b)A)$.

Julia 1.1

Support for raising Irrational numbers (like ℯ) to a matrix was added in Julia 1.1.

Examples

julia> 2^[1 2; 0 3]2×2 Matrix{Float64}: 2.0  6.0 0.0  8.0julia> ℯ^[1 2; 0 3]2×2 Matrix{Float64}: 2.71828  17.3673 0.0      20.0855
Base.logMethod
log(A::AbstractMatrix)

IfA has no negative real eigenvalue, compute the principal matrix logarithm ofA, i.e. the unique matrix$X$ such that$e^X = A$ and$-\pi < Im(\lambda) < \pi$ for all the eigenvalues$\lambda$ of$X$. IfA has nonpositive eigenvalues, a nonprincipal matrix function is returned whenever possible.

IfA is symmetric or Hermitian, its eigendecomposition (eigen) is used, ifA is triangular an improved version of the inverse scaling and squaring method is employed (see[AH12] and[AHR13]). IfA is real with no negative eigenvalues, then the real Schur form is computed. Otherwise, the complex Schur form is computed. Then the upper (quasi-)triangular algorithm in[AHR13] is used on the upper (quasi-)triangular factor.

Examples

julia> A = Matrix(2.7182818*I, 2, 2)2×2 Matrix{Float64}: 2.71828  0.0 0.0      2.71828julia> log(A)2×2 Matrix{Float64}: 1.0  0.0 0.0  1.0
Base.sqrtMethod
sqrt(x)

Return$\sqrt{x}$.

ThrowsDomainError for negativeReal arguments. Use complex negative arguments instead. Note thatsqrt has a branch cut along the negative real axis.

The prefix operator √ is equivalent to sqrt.

See also:hypot.

Examples

julia> sqrt(big(81))9.0julia> sqrt(big(-81))ERROR: DomainError with -81.0:NaN result for non-NaN input.Stacktrace: [1] sqrt(::BigFloat) at ./mpfr.jl:501[...]julia> sqrt(big(complex(-81)))0.0 + 9.0imjulia> sqrt(-81 - 0.0im)  # -0.0im is below the branch cut0.0 - 9.0imjulia> .√(1:4)4-element Vector{Float64}: 1.0 1.4142135623730951 1.7320508075688772 2.0
sqrt(A::AbstractMatrix)

IfA has no negative real eigenvalues, compute the principal matrix square root ofA, that is the unique matrix$X$ with eigenvalues having positive real part such that$X^2 = A$. Otherwise, a nonprincipal square root is returned.

IfA is real-symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the square root. For such matrices, eigenvalues λ that appear to be slightly negative due to roundoff errors are treated as if they were zero. More precisely, matrices with all eigenvalues≥ -rtol*(max |λ|) are treated as semidefinite (yielding a Hermitian square root), with negative eigenvalues taken to be zero.rtol is a keyword argument tosqrt (in the Hermitian/real-symmetric case only) that defaults to machine precision scaled bysize(A,1).

Otherwise, the square root is determined by means of the Björck-Hammarling method[BH83], which computes the complex Schur form (schur) and then the complex square root of the triangular factor. If a real square root exists, then an extension of this method[H87] that computes the real Schur form and then the real square root of the quasi-triangular factor is instead used.

Examples

julia> A = [4 0; 0 4]2×2 Matrix{Int64}: 4  0 0  4julia> sqrt(A)2×2 Matrix{Float64}: 2.0  0.0 0.0  2.0
Base.Math.cbrtMethod
cbrt(A::AbstractMatrix{<:Real})

Computes the real-valued cube root of a real-valued matrixA. IfT = cbrt(A), then we haveT*T*T ≈ A, see example given below.

If A is symmetric, i.e., of type HermOrSym{<:Real}, then its eigendecomposition (eigen) is used to find the cube root. Otherwise, a specialized version of the p-th root algorithm[S03] is utilized, which exploits the real-valued Schur decomposition (schur) to compute the cube root.

Examples

julia> A = [0.927524 -0.15857; -1.3677 -1.01172]2×2 Matrix{Float64}:  0.927524  -0.15857 -1.3677    -1.01172julia> T = cbrt(A)2×2 Matrix{Float64}:  0.910077  -0.151019 -1.30257   -0.936818julia> T*T*T ≈ Atrue
Base.cosMethod
cos(A::AbstractMatrix)

Compute the matrix cosine of a square matrixA.

IfA is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the cosine. Otherwise, the cosine is determined by callingexp.

Examples

julia> cos(fill(1.0, (2,2)))2×2 Matrix{Float64}:  0.291927  -0.708073 -0.708073   0.291927
Base.sinMethod
sin(A::AbstractMatrix)

Compute the matrix sine of a square matrixA.

IfA is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the sine. Otherwise, the sine is determined by callingexp.

Examples

julia> sin(fill(1.0, (2,2)))2×2 Matrix{Float64}: 0.454649  0.454649 0.454649  0.454649
Base.Math.sincosMethod
sincos(A::AbstractMatrix)

Compute the matrix sine and cosine of a square matrixA.

Examples

julia> S, C = sincos(fill(1.0, (2,2)));julia> S2×2 Matrix{Float64}: 0.454649  0.454649 0.454649  0.454649julia> C2×2 Matrix{Float64}:  0.291927  -0.708073 -0.708073   0.291927
Base.tanMethod
tan(A::AbstractMatrix)

Compute the matrix tangent of a square matrixA.

IfA is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the tangent. Otherwise, the tangent is determined by callingexp.

Examples

julia> tan(fill(1.0, (2,2)))2×2 Matrix{Float64}: -1.09252  -1.09252 -1.09252  -1.09252
Base.Math.secMethod
sec(A::AbstractMatrix)

Compute the matrix secant of a square matrixA.
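
The matrix secant agrees with inv(cos(A)) up to rounding; a quick check:

julia> A = fill(1.0, (2, 2));

julia> sec(A) ≈ inv(cos(A))
true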

Base.Math.cscMethod
csc(A::AbstractMatrix)

Compute the matrix cosecant of a square matrixA.

Base.Math.cotMethod
cot(A::AbstractMatrix)

Compute the matrix cotangent of a square matrixA.

Base.coshMethod
cosh(A::AbstractMatrix)

Compute the matrix hyperbolic cosine of a square matrixA.

Base.sinhMethod
sinh(A::AbstractMatrix)

Compute the matrix hyperbolic sine of a square matrixA.

Base.tanhMethod
tanh(A::AbstractMatrix)

Compute the matrix hyperbolic tangent of a square matrixA.
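
Because a matrix commutes with itself, the familiar hyperbolic identities carry over; a quick numerical check:

julia> A = [0.5 0.1; -0.2 0.3];

julia> cosh(A)^2 - sinh(A)^2 ≈ I
true

julia> tanh(A) ≈ sinh(A) / cosh(A)
true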

Base.Math.sechMethod
sech(A::AbstractMatrix)

Compute the matrix hyperbolic secant of square matrixA.

Base.Math.cschMethod
csch(A::AbstractMatrix)

Compute the matrix hyperbolic cosecant of square matrixA.

Base.Math.cothMethod
coth(A::AbstractMatrix)

Compute the matrix hyperbolic cotangent of square matrixA.

Base.acosMethod
acos(A::AbstractMatrix)

Compute the inverse matrix cosine of a square matrixA.

IfA is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the inverse cosine. Otherwise, the inverse cosine is determined by usinglog andsqrt. For the theory and logarithmic formulas used to compute this function, see[AH16_1].

Examples

julia> acos(cos([0.5 0.1; -0.2 0.3]))2×2 Matrix{ComplexF64}:  0.5-8.32667e-17im  0.1+0.0im -0.2+2.63678e-16im  0.3-3.46945e-16im
Base.asinMethod
asin(A::AbstractMatrix)

Compute the inverse matrix sine of a square matrixA.

IfA is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the inverse sine. Otherwise, the inverse sine is determined by usinglog andsqrt. For the theory and logarithmic formulas used to compute this function, see[AH16_2].

Examples

julia> asin(sin([0.5 0.1; -0.2 0.3]))2×2 Matrix{ComplexF64}:  0.5-4.16334e-17im  0.1-5.55112e-17im -0.2+9.71445e-17im  0.3-1.249e-16im
Base.atanMethod
atan(A::AbstractMatrix)

Compute the inverse matrix tangent of a square matrixA.

IfA is symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the inverse tangent. Otherwise, the inverse tangent is determined by usinglog. For the theory and logarithmic formulas used to compute this function, see[AH16_3].

Examples

julia> atan(tan([0.5 0.1; -0.2 0.3]))2×2 Matrix{ComplexF64}:  0.5+1.38778e-17im  0.1-2.77556e-17im -0.2+6.93889e-17im  0.3-4.16334e-17im
Base.Math.asecMethod
asec(A::AbstractMatrix)

Compute the inverse matrix secant ofA.

Base.Math.acscMethod
acsc(A::AbstractMatrix)

Compute the inverse matrix cosecant ofA.

Base.Math.acotMethod
acot(A::AbstractMatrix)

Compute the inverse matrix cotangent ofA.
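
As a round-trip sanity check for asec, on a matrix whose eigenvalues lie in the principal range (the result may carry negligible imaginary parts, as in the acos example above):

julia> A = [2.0 0.5; 0.0 3.0];

julia> asec(sec(A)) ≈ A
true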

Base.acoshMethod
acosh(A::AbstractMatrix)

Compute the inverse hyperbolic matrix cosine of a square matrixA. For the theory and logarithmic formulas used to compute this function, see[AH16_4].

Base.asinhMethod
asinh(A::AbstractMatrix)

Compute the inverse hyperbolic matrix sine of a square matrixA. For the theory and logarithmic formulas used to compute this function, see[AH16_5].

Base.atanhMethod
atanh(A::AbstractMatrix)

Compute the inverse hyperbolic matrix tangent of a square matrixA. For the theory and logarithmic formulas used to compute this function, see[AH16_6].
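
A round-trip check for arguments whose eigenvalues stay away from the branch cuts:

julia> A = [0.5 0.1; -0.2 0.3];

julia> atanh(tanh(A)) ≈ A
true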

Base.Math.asechMethod
asech(A::AbstractMatrix)

Compute the inverse matrix hyperbolic secant ofA.

Base.Math.acschMethod
acsch(A::AbstractMatrix)

Compute the inverse matrix hyperbolic cosecant ofA.

Base.Math.acothMethod
acoth(A::AbstractMatrix)

Compute the inverse matrix hyperbolic cotangent ofA.

LinearAlgebra.lyapFunction
lyap(A, C)

Computes the solutionX to the continuous Lyapunov equationAX + XA' + C = 0, where no eigenvalue ofA has a zero real part and no two eigenvalues are negative complex conjugates of each other.

Examples

julia> A = [3. 4.; 5. 6]2×2 Matrix{Float64}: 3.0  4.0 5.0  6.0julia> B = [1. 1.; 1. 2.]2×2 Matrix{Float64}: 1.0  1.0 1.0  2.0julia> X = lyap(A, B)2×2 Matrix{Float64}:  0.5  -0.5 -0.5   0.25julia> A*X + X*A' ≈ -Btrue
LinearAlgebra.sylvesterFunction
sylvester(A, B, C)

Computes the solutionX to the Sylvester equationAX + XB + C = 0, whereA,B andC have compatible dimensions andA and-B have no eigenvalues with equal real part.

Examples

julia> A = [3. 4.; 5. 6]2×2 Matrix{Float64}: 3.0  4.0 5.0  6.0julia> B = [1. 1.; 1. 2.]2×2 Matrix{Float64}: 1.0  1.0 1.0  2.0julia> C = [1. 2.; -2. 1]2×2 Matrix{Float64}:  1.0  2.0 -2.0  1.0julia> X = sylvester(A, B, C)2×2 Matrix{Float64}: -4.46667   1.93333  3.73333  -1.8julia> A*X + X*B ≈ -Ctrue
LinearAlgebra.issuccessFunction
issuccess(F::Factorization)

Test that a factorization of a matrix succeeded.

Julia 1.6

issuccess(::CholeskyPivoted) requires Julia 1.6 or later.

Examples

julia> F = cholesky([1 0; 0 1]);julia> issuccess(F)true
issuccess(F::LU; allowsingular = false)

Test that the LU factorization of a matrix succeeded. By default a factorization that produces a valid but rank-deficient U factor is considered a failure. This can be changed by passingallowsingular = true.

Julia 1.11

Theallowsingular keyword argument was added in Julia 1.11.

Examples

julia> F = lu([1 2; 1 2], check = false);julia> issuccess(F)falsejulia> issuccess(F, allowsingular = true)true
LinearAlgebra.issymmetricFunction
issymmetric(A) -> Bool

Test whether a matrix is symmetric.

Examples

julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1   2 2  -1julia> issymmetric(a)truejulia> b = [1 im; -im 1]2×2 Matrix{Complex{Int64}}: 1+0im  0+1im 0-1im  1+0imjulia> issymmetric(b)false
LinearAlgebra.isposdefFunction
isposdef(A) -> Bool

Test whether a matrix is positive definite (and Hermitian) by trying to perform a Cholesky factorization ofA.

See alsoisposdef!,cholesky.

Examples

julia> A = [1 2; 2 50]2×2 Matrix{Int64}: 1   2 2  50julia> isposdef(A)true
LinearAlgebra.isposdef!Function
isposdef!(A) -> Bool

Test whether a matrix is positive definite (and Hermitian) by trying to perform a Cholesky factorization ofA, overwritingA in the process. See alsoisposdef.

Examples

julia> A = [1. 2.; 2. 50.];julia> isposdef!(A)truejulia> A2×2 Matrix{Float64}: 1.0  2.0 2.0  6.78233
LinearAlgebra.istrilFunction
istril(A::AbstractMatrix, k::Integer = 0) -> Bool

Test whetherA is lower triangular starting from thekth superdiagonal.

Examples

julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1   2 2  -1julia> istril(a)falsejulia> istril(a, 1)truejulia> c = [1 1 0; 1 1 1; 1 1 1]3×3 Matrix{Int64}: 1  1  0 1  1  1 1  1  1julia> istril(c)falsejulia> istril(c, 1)true
LinearAlgebra.istriuFunction
istriu(A::AbstractMatrix, k::Integer = 0) -> Bool

Test whetherA is upper triangular starting from thekth superdiagonal.

Examples

julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1   2 2  -1julia> istriu(a)falsejulia> istriu(a, -1)truejulia> c = [1 1 1; 1 1 1; 0 1 1]3×3 Matrix{Int64}: 1  1  1 1  1  1 0  1  1julia> istriu(c)falsejulia> istriu(c, -1)true
LinearAlgebra.isdiagFunction
isdiag(A) -> Bool

Test whether a matrix is diagonal in the sense thatiszero(A[i,j]) is true unlessi == j. Note that it is not necessary forA to be square; if you would also like to check that, you need to check thatsize(A, 1) == size(A, 2).

Examples

julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1   2 2  -1julia> isdiag(a)falsejulia> b = [im 0; 0 -im]2×2 Matrix{Complex{Int64}}: 0+1im  0+0im 0+0im  0-1imjulia> isdiag(b)truejulia> c = [1 0 0; 0 2 0]2×3 Matrix{Int64}: 1  0  0 0  2  0julia> isdiag(c)truejulia> d = [1 0 0; 0 2 3]2×3 Matrix{Int64}: 1  0  0 0  2  3julia> isdiag(d)false
LinearAlgebra.ishermitianFunction
ishermitian(A) -> Bool

Test whether a matrix is Hermitian.

Examples

julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1   2 2  -1julia> ishermitian(a)truejulia> b = [1 im; -im 1]2×2 Matrix{Complex{Int64}}: 1+0im  0+1im 0-1im  1+0imjulia> ishermitian(b)true
Base.transposeFunction
transpose(A)

Lazy transpose. Mutating the returned object should appropriately mutateA. Often, but not always, yieldsTranspose(A), whereTranspose is a lazy transpose wrapper. Note that this operation is recursive.

This operation is intended for linear algebra usage - for general data manipulation seepermutedims, which is non-recursive.

Examples

julia> A = [3 2; 0 0]2×2 Matrix{Int64}: 3  2 0  0julia> B = transpose(A)2×2 transpose(::Matrix{Int64}) with eltype Int64: 3  0 2  0julia> B isa Transposetruejulia> transpose(B) === A # the transpose of a transpose unwraps the parenttruejulia> Transpose(B) # however, the constructor always wraps its argument2×2 transpose(transpose(::Matrix{Int64})) with eltype Int64: 3  2 0  0julia> B[1,2] = 4; # modifying B will modify A automaticallyjulia> A2×2 Matrix{Int64}: 3  2 4  0

For complex matrices, theadjoint operation is equivalent to a conjugate-transpose.

julia> A = reshape([Complex(x, x) for x in 1:4], 2, 2)2×2 Matrix{Complex{Int64}}: 1+1im  3+3im 2+2im  4+4imjulia> adjoint(A) == conj(transpose(A))true

Thetranspose of anAbstractVector is a row-vector:

julia> v = [1,2,3]3-element Vector{Int64}: 1 2 3julia> transpose(v) # returns a row-vector1×3 transpose(::Vector{Int64}) with eltype Int64: 1  2  3julia> transpose(v) * v # compute the dot product14

For a matrix of matrices, the individual blocks are recursively operated on:

julia> C = [1 3; 2 4]2×2 Matrix{Int64}: 1  3 2  4julia> D = reshape([C, 2C, 3C, 4C], 2, 2) # construct a block matrix2×2 Matrix{Matrix{Int64}}: [1 3; 2 4]  [3 9; 6 12] [2 6; 4 8]  [4 12; 8 16]julia> transpose(D) # blocks are recursively transposed2×2 transpose(::Matrix{Matrix{Int64}}) with eltype Transpose{Int64, Matrix{Int64}}: [1 2; 3 4]   [2 4; 6 8] [3 6; 9 12]  [4 8; 12 16]
transpose(F::Factorization)

Lazy transpose of the factorizationF. By default, returns aTransposeFactorization, except forFactorizations with realeltype, in which case returns anAdjointFactorization.

LinearAlgebra.transpose!Function
transpose!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti}) where {Tv,Ti}

Transpose the matrix A and store it in the matrix X. size(X) must be equal to size(transpose(A)). No additional memory is allocated other than resizing the rowval and nzval of X, if needed.

Seehalfperm!

transpose!(dest,src)

Transpose arraysrc and store the result in the preallocated arraydest, which should have a size corresponding to(size(src,2),size(src,1)). No in-place transposition is supported and unexpected results will happen ifsrc anddest have overlapping memory regions.

Examples

julia> A = [3+2im 9+2im; 8+7im  4+6im]2×2 Matrix{Complex{Int64}}: 3+2im  9+2im 8+7im  4+6imjulia> B = zeros(Complex{Int64}, 2, 2)2×2 Matrix{Complex{Int64}}: 0+0im  0+0im 0+0im  0+0imjulia> transpose!(B, A);julia> B2×2 Matrix{Complex{Int64}}: 3+2im  8+7im 9+2im  4+6imjulia> A2×2 Matrix{Complex{Int64}}: 3+2im  9+2im 8+7im  4+6im
LinearAlgebra.TransposeType
Transpose

Lazy wrapper type for a transpose view of the underlying linear algebra object, usually anAbstractVector/AbstractMatrix. Usually, theTranspose constructor should not be called directly, usetranspose instead. To materialize the view usecopy.

This type is intended for linear algebra usage - for general data manipulation seepermutedims.

Examples

julia> A = [2 3; 0 0]2×2 Matrix{Int64}: 2  3 0  0julia> Transpose(A)2×2 transpose(::Matrix{Int64}) with eltype Int64: 2  0 3  0
LinearAlgebra.TransposeFactorizationType
TransposeFactorization

Lazy wrapper type for the transpose of the underlyingFactorization object. Usually, theTransposeFactorization constructor should not be called directly, usetranspose(:: Factorization) instead.

Base.adjointFunction
A'adjoint(A)

Lazy adjoint (conjugate transposition). Note thatadjoint is applied recursively to elements.

For number types,adjoint returns the complex conjugate, and therefore it is equivalent to the identity function for real numbers.

This operation is intended for linear algebra usage - for general data manipulation seepermutedims.

Examples

julia> A = [3+2im 9+2im; 0  0]2×2 Matrix{Complex{Int64}}: 3+2im  9+2im 0+0im  0+0imjulia> B = A' # equivalently adjoint(A)2×2 adjoint(::Matrix{Complex{Int64}}) with eltype Complex{Int64}: 3-2im  0+0im 9-2im  0+0imjulia> B isa Adjointtruejulia> adjoint(B) === A # the adjoint of an adjoint unwraps the parenttruejulia> Adjoint(B) # however, the constructor always wraps its argument2×2 adjoint(adjoint(::Matrix{Complex{Int64}})) with eltype Complex{Int64}: 3+2im  9+2im 0+0im  0+0imjulia> B[1,2] = 4 + 5im; # modifying B will modify A automaticallyjulia> A2×2 Matrix{Complex{Int64}}: 3+2im  9+2im 4-5im  0+0im

For real matrices, theadjoint operation is equivalent to atranspose.

julia> A = reshape([x for x in 1:4], 2, 2)2×2 Matrix{Int64}: 1  3 2  4julia> A'2×2 adjoint(::Matrix{Int64}) with eltype Int64: 1  2 3  4julia> adjoint(A) == transpose(A)true

The adjoint of anAbstractVector is a row-vector:

julia> x = [3, 4im]2-element Vector{Complex{Int64}}: 3 + 0im 0 + 4imjulia> x'1×2 adjoint(::Vector{Complex{Int64}}) with eltype Complex{Int64}: 3+0im  0-4imjulia> x'x # compute the dot product, equivalently x' * x25 + 0im

For a matrix of matrices, the individual blocks are recursively operated on:

julia> A = reshape([x + im*x for x in 1:4], 2, 2)2×2 Matrix{Complex{Int64}}: 1+1im  3+3im 2+2im  4+4imjulia> C = reshape([A, 2A, 3A, 4A], 2, 2)2×2 Matrix{Matrix{Complex{Int64}}}: [1+1im 3+3im; 2+2im 4+4im]  [3+3im 9+9im; 6+6im 12+12im] [2+2im 6+6im; 4+4im 8+8im]  [4+4im 12+12im; 8+8im 16+16im]julia> C'2×2 adjoint(::Matrix{Matrix{Complex{Int64}}}) with eltype Adjoint{Complex{Int64}, Matrix{Complex{Int64}}}: [1-1im 2-2im; 3-3im 4-4im]    [2-2im 4-4im; 6-6im 8-8im] [3-3im 6-6im; 9-9im 12-12im]  [4-4im 8-8im; 12-12im 16-16im]
adjoint(F::Factorization)

Lazy adjoint of the factorizationF. By default, returns anAdjointFactorization wrapper.

LinearAlgebra.adjoint!Function
adjoint!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti}) where {Tv,Ti}

Transpose the matrix A and store the adjoint of the elements in the matrix X. size(X) must be equal to size(transpose(A)). No additional memory is allocated other than resizing the rowval and nzval of X, if needed.

Seehalfperm!

adjoint!(dest,src)

Conjugate transpose arraysrc and store the result in the preallocated arraydest, which should have a size corresponding to(size(src,2),size(src,1)). No in-place transposition is supported and unexpected results will happen ifsrc anddest have overlapping memory regions.

Examples

julia> A = [3+2im 9+2im; 8+7im  4+6im]2×2 Matrix{Complex{Int64}}: 3+2im  9+2im 8+7im  4+6imjulia> B = zeros(Complex{Int64}, 2, 2)2×2 Matrix{Complex{Int64}}: 0+0im  0+0im 0+0im  0+0imjulia> adjoint!(B, A);julia> B2×2 Matrix{Complex{Int64}}: 3-2im  8-7im 9-2im  4-6imjulia> A2×2 Matrix{Complex{Int64}}: 3+2im  9+2im 8+7im  4+6im
LinearAlgebra.AdjointType
Adjoint

Lazy wrapper type for an adjoint view of the underlying linear algebra object, usually anAbstractVector/AbstractMatrix. Usually, theAdjoint constructor should not be called directly, useadjoint instead. To materialize the view usecopy.

This type is intended for linear algebra usage - for general data manipulation seepermutedims.

Examples

julia> A = [3+2im 9+2im; 0 0]2×2 Matrix{Complex{Int64}}: 3+2im  9+2im 0+0im  0+0imjulia> Adjoint(A)2×2 adjoint(::Matrix{Complex{Int64}}) with eltype Complex{Int64}: 3-2im  0+0im 9-2im  0+0im
LinearAlgebra.AdjointFactorizationType
AdjointFactorization

Lazy wrapper type for the adjoint of the underlyingFactorization object. Usually, theAdjointFactorization constructor should not be called directly, useadjoint(:: Factorization) instead.

Base.copyMethod
copy(A::Transpose)copy(A::Adjoint)

Eagerly evaluate the lazy matrix transpose/adjoint. Note that the transposition is applied recursively to elements.

This operation is intended for linear algebra usage - for general data manipulation seepermutedims, which is non-recursive.

Examples

julia> A = [1 2im; -3im 4]2×2 Matrix{Complex{Int64}}: 1+0im  0+2im 0-3im  4+0imjulia> T = transpose(A)2×2 transpose(::Matrix{Complex{Int64}}) with eltype Complex{Int64}: 1+0im  0-3im 0+2im  4+0imjulia> copy(T)2×2 Matrix{Complex{Int64}}: 1+0im  0-3im 0+2im  4+0im
LinearAlgebra.stride1Function
stride1(A) -> Int

Return the distance between successive array elements in dimension 1 in units of element size.

Examples

julia> A = [1,2,3,4]4-element Vector{Int64}: 1 2 3 4julia> LinearAlgebra.stride1(A)1julia> B = view(A, 2:2:4)2-element view(::Vector{Int64}, 2:2:4) with eltype Int64: 2 4julia> LinearAlgebra.stride1(B)2
LinearAlgebra.checksquareFunction
LinearAlgebra.checksquare(A)

Check that a matrix is square, then return its common dimension. For multiple arguments, return a vector.

Examples

julia> A = fill(1, (4,4)); B = fill(1, (5,5));julia> LinearAlgebra.checksquare(A, B)2-element Vector{Int64}: 4 5
LinearAlgebra.peakflopsFunction
LinearAlgebra.peakflops(n::Integer=4096; eltype::DataType=Float64, ntrials::Integer=3, parallel::Bool=false)

peakflops computes the peak flop rate of the computer by using double precisiongemm!. By default, if no arguments are specified, it multiplies twoFloat64 matrices of sizen x n, wheren = 4096. If the underlying BLAS is using multiple threads, higher flop rates are realized. The number of BLAS threads can be set withBLAS.set_num_threads(n).

If the keyword argumenteltype is provided,peakflops will construct matrices with elements of typeeltype for calculating the peak flop rate.

By default,peakflops will use the best timing from 3 trials. If thentrials keyword argument is provided,peakflops will use those many trials for picking the best timing.

If the keyword argumentparallel is set totrue,peakflops is run in parallel on all the worker processors. The flop rate of the entire parallel computer is returned. When running in parallel, only 1 BLAS thread is used. The argumentn still refers to the size of the problem that is solved on each processor.

Julia 1.1

This function requires at least Julia 1.1. In Julia 1.0 it is available from the standard libraryInteractiveUtils.
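
A usage sketch; the returned flop rate is machine-dependent, so the output is suppressed here:

julia> LinearAlgebra.peakflops(1000; ntrials = 2);  # flops/second for a 1000×1000 Float64 gemm!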

LinearAlgebra.hermitianpartFunction
hermitianpart(A::AbstractMatrix, uplo::Symbol=:U) -> Hermitian

Return the Hermitian part of the square matrixA, defined as(A + A') / 2, as aHermitian matrix. For real matricesA, this is also known as the symmetric part ofA; it is also sometimes called the "operator real part". The optional argumentuplo controls the corresponding argument of theHermitian view. For real matrices, the latter is equivalent to aSymmetric view.

See alsohermitianpart! for the corresponding in-place operation.

Julia 1.10

This function requires Julia 1.10 or later.
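
A minimal sketch (the element type promotes under the division by 2):

julia> A = [1 2; 0 1];

julia> hermitianpart(A)
2×2 Hermitian{Float64, Matrix{Float64}}:
 1.0  1.0
 1.0  1.0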

LinearAlgebra.hermitianpart!Function
hermitianpart!(A::AbstractMatrix, uplo::Symbol=:U) -> Hermitian

Overwrite the square matrixA in-place with its Hermitian part(A + A') / 2, and returnHermitian(A, uplo). For real matricesA, this is also known as the symmetric part ofA.

See alsohermitianpart for the corresponding out-of-place operation.

Julia 1.10

This function requires Julia 1.10 or later.

LinearAlgebra.copy_adjoint!Function
copy_adjoint!(B::AbstractVecOrMat, ir_dest::AbstractRange{Int}, jr_dest::AbstractRange{Int},                A::AbstractVecOrMat, ir_src::AbstractRange{Int}, jr_src::AbstractRange{Int}) -> B

Efficiently copy elements of matrixA toB with adjunction as follows:

B[ir_dest, jr_dest] = adjoint(A)[jr_src, ir_src]

The elementsB[ir_dest, jr_dest] are overwritten. Furthermore, the index range parameters must satisfylength(ir_dest) == length(jr_src) andlength(jr_dest) == length(ir_src).

LinearAlgebra.copy_transpose!Function
copy_transpose!(B::AbstractVecOrMat, ir_dest::AbstractRange{Int}, jr_dest::AbstractRange{Int},                A::AbstractVecOrMat, ir_src::AbstractRange{Int}, jr_src::AbstractRange{Int}) -> B

Efficiently copy elements of matrixA toB with transposition as follows:

B[ir_dest, jr_dest] = transpose(A)[jr_src, ir_src]

The elementsB[ir_dest, jr_dest] are overwritten. Furthermore, the index range parameters must satisfylength(ir_dest) == length(jr_src) andlength(jr_dest) == length(ir_src).

copy_transpose!(B::AbstractMatrix, ir_dest::AbstractUnitRange, jr_dest::AbstractUnitRange,                tM::AbstractChar,                M::AbstractVecOrMat, ir_src::AbstractUnitRange, jr_src::AbstractUnitRange) -> B

Efficiently copy elements of matrixM toB conditioned on the character parametertM as follows:

'N': B[ir_dest, jr_dest] = transpose(M)[jr_src, ir_src]
'T': B[ir_dest, jr_dest] = M[jr_src, ir_src]
'C': B[ir_dest, jr_dest] = conj(M)[jr_src, ir_src]

The elementsB[ir_dest, jr_dest] are overwritten. Furthermore, the index range parameters must satisfylength(ir_dest) == length(jr_src) andlength(jr_dest) == length(ir_src).

See alsocopyto! andcopy_adjoint!.

Low-level matrix operations

In many cases there are in-place versions of matrix operations that allow you to supply a pre-allocated output vector or matrix. This is useful when optimizing critical code in order to avoid the overhead of repeated allocations. These in-place operations are suffixed with! below (e.g.mul!) according to the usual Julia convention.

LinearAlgebra.mul!Function
mul!(Y, A, B) -> Y

Calculates the matrix-matrix or matrix-vector product$A B$ and stores the result inY, overwriting the existing value ofY. Note thatY must not be aliased with eitherA orB.

Examples

julia> A = [1.0 2.0; 3.0 4.0]; B = [1.0 1.0; 1.0 1.0]; Y = similar(B);julia> mul!(Y, A, B) === Ytruejulia> Y2×2 Matrix{Float64}: 3.0  3.0 7.0  7.0julia> Y == A * Btrue

Implementation

For custom matrix and vector types, it is recommended to implement 5-argumentmul! rather than implementing 3-argumentmul! directly if possible.

mul!(C, A, B, α, β) -> C

Combined in-place matrix-matrix or matrix-vector multiply-add $A B α + C β$. The result is stored in C by overwriting it. Note that C must not be aliased with either A or B.

Julia 1.3

Five-argumentmul! requires at least Julia 1.3.

Examples

julia> A = [1.0 2.0; 3.0 4.0]; B = [1.0 1.0; 1.0 1.0]; C = [1.0 2.0; 3.0 4.0];julia> α, β = 100.0, 10.0;julia> mul!(C, A, B, α, β) === Ctruejulia> C2×2 Matrix{Float64}: 310.0  320.0 730.0  740.0julia> C_original = [1.0 2.0; 3.0 4.0]; # A copy of the original value of Cjulia> C == A * B * α + C_original * βtrue
LinearAlgebra.lmul!Function
lmul!(a::Number, B::AbstractArray)

Scale an arrayB by a scalara overwritingB in-place. Usermul! to multiply scalar from right. The scaling operation respects the semantics of the multiplication* betweena and an element ofB. In particular, this also applies to multiplication involving non-finite numbers such asNaN and±Inf.

Julia 1.1

Prior to Julia 1.1,NaN and±Inf entries inB were treated inconsistently.

Examples

julia> B = [1 2; 3 4]2×2 Matrix{Int64}: 1  2 3  4julia> lmul!(2, B)2×2 Matrix{Int64}: 2  4 6  8julia> lmul!(0.0, [Inf])1-element Vector{Float64}: NaN
lmul!(A, B)

Calculate the matrix-matrix product$AB$, overwritingB, and return the result. Here,A must be of special matrix type, like, e.g.,Diagonal,UpperTriangular orLowerTriangular, or of some orthogonal type, seeQR.

Examples

julia> B = [0 1; 1 0];julia> A = UpperTriangular([1 2; 0 3]);julia> lmul!(A, B);julia> B2×2 Matrix{Int64}: 2  1 3  0julia> B = [1.0 2.0; 3.0 4.0];julia> F = qr([0 1; -1 0]);julia> lmul!(F.Q, B)2×2 Matrix{Float64}: 3.0  4.0 1.0  2.0
LinearAlgebra.rmul!Function
rmul!(A::AbstractArray, b::Number)

Scale an arrayA by a scalarb overwritingA in-place. Uselmul! to multiply scalar from left. The scaling operation respects the semantics of the multiplication* between an element ofA andb. In particular, this also applies to multiplication involving non-finite numbers such asNaN and±Inf.

Julia 1.1

Prior to Julia 1.1,NaN and±Inf entries inA were treated inconsistently.

Examples

julia> A = [1 2; 3 4]2×2 Matrix{Int64}: 1  2 3  4julia> rmul!(A, 2)2×2 Matrix{Int64}: 2  4 6  8julia> rmul!([NaN], 0.0)1-element Vector{Float64}: NaN
rmul!(A, B)

Calculate the matrix-matrix product$AB$, overwritingA, and return the result. Here,B must be of special matrix type, like, e.g.,Diagonal,UpperTriangular orLowerTriangular, or of some orthogonal type, seeQR.

Examples

julia> A = [0 1; 1 0];julia> B = UpperTriangular([1 2; 0 3]);julia> rmul!(A, B);julia> A2×2 Matrix{Int64}: 0  3 1  2julia> A = [1.0 2.0; 3.0 4.0];julia> F = qr([0 1; -1 0]);julia> rmul!(A, F.Q)2×2 Matrix{Float64}: 2.0  1.0 4.0  3.0
LinearAlgebra.ldiv!Function
ldiv!(Y, A, B) -> Y

ComputeA \ B in-place and store the result inY, returning the result.

The argumentA shouldnot be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced byfactorize orcholesky). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g.,lu!), and performance-critical situations requiringldiv! usually also require fine-grained control over the factorization ofA.

Note

Certain structured matrix types, such asDiagonal andUpperTriangular, are permitted, as these are already in a factorized form

Examples

julia> A = [1 2.2 4; 3.1 0.2 3; 4 1 2];julia> X = [1; 2.5; 3];julia> Y = zero(X);julia> ldiv!(Y, qr(A), X);julia> Y ≈ A\Xtrue
ldiv!(A, B)

ComputeA \ B in-place and overwritingB to store the result.

The argumentA shouldnot be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced byfactorize orcholesky). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g.,lu!), and performance-critical situations requiringldiv! usually also require fine-grained control over the factorization ofA.

Note

Certain structured matrix types, such asDiagonal andUpperTriangular, are permitted, as these are already in a factorized form

Examples

julia> A = [1 2.2 4; 3.1 0.2 3; 4 1 2];julia> X = [1; 2.5; 3];julia> Y = copy(X);julia> ldiv!(qr(A), X);julia> X ≈ A\Ytrue
ldiv!(a::Number, B::AbstractArray)

Divide each entry in an arrayB by a scalara overwritingB in-place. Userdiv! to divide scalar from right.

Examples

julia> B = [1.0 2.0; 3.0 4.0]2×2 Matrix{Float64}: 1.0  2.0 3.0  4.0julia> ldiv!(2.0, B)2×2 Matrix{Float64}: 0.5  1.0 1.5  2.0
ldiv!(A::Tridiagonal, B::AbstractVecOrMat) -> B

ComputeA \ B in-place by Gaussian elimination with partial pivoting and store the result inB, returning the result. In the process, the diagonals ofA are overwritten as well.

Julia 1.11

ldiv! forTridiagonal left-hand sides requires at least Julia 1.11.

LinearAlgebra.rdiv!Function
rdiv!(A, B)

ComputeA / B in-place and overwritingA to store the result.

The argumentB shouldnot be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced byfactorize orcholesky). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g.,lu!), and performance-critical situations requiringrdiv! usually also require fine-grained control over the factorization ofB.

Note

Certain structured matrix types, such asDiagonal andUpperTriangular, are permitted, as these are already in a factorized form

rdiv!(A::AbstractArray, b::Number)

Divide each entry in an arrayA by a scalarb overwritingA in-place. Useldiv! to divide scalar from left.

Examples

julia> A = [1.0 2.0; 3.0 4.0]2×2 Matrix{Float64}: 1.0  2.0 3.0  4.0julia> rdiv!(A, 2.0)2×2 Matrix{Float64}: 0.5  1.0 1.5  2.0

BLAS functions

In Julia (as in much of scientific computation), dense linear-algebra operations are based on theLAPACK library, which in turn is built on top of basic linear-algebra building-blocks known as theBLAS. There are highly optimized implementations of BLAS available for every computer architecture, and sometimes in high-performance linear algebra routines it is useful to call the BLAS functions directly.

LinearAlgebra.BLAS provides wrappers for some of the BLAS functions. Those BLAS functions that overwrite one of the input arrays have names ending in'!'. Usually, a BLAS function has four methods defined, forFloat32,Float64,ComplexF32, andComplexF64 arrays.

BLAS character arguments

Many BLAS functions accept arguments that determine whether to transpose an argument (trans), which triangle of a matrix to reference (uplo orul), whether the diagonal of a triangular matrix can be assumed to be all ones (dA) or which side of a matrix multiplication the input argument belongs on (side). The possibilities are:

Multiplication order

side:
'L': The argument goes on the left side of a matrix-matrix operation.
'R': The argument goes on the right side of a matrix-matrix operation.

Triangle referencing

uplo/ul:
'U': Only the upper triangle of the matrix will be used.
'L': Only the lower triangle of the matrix will be used.

Transposition operation

trans/tX:
'N': The input matrix X is not transposed or conjugated.
'T': The input matrix X will be transposed.
'C': The input matrix X will be conjugated and transposed.

Unit diagonal

diag/dX:
'N': The diagonal values of the matrix X will be read.
'U': The diagonal of the matrix X is assumed to be all ones.
LinearAlgebra.BLASModule

Interface to BLAS subroutines.

LinearAlgebra.BLAS.set_num_threadsFunction
set_num_threads(n::Integer)set_num_threads(::Nothing)

Set the number of threads the BLAS library should use equal ton::Integer.

Also accepts nothing, in which case Julia tries to guess the default number of threads. Passing nothing is discouraged and mainly exists for historical reasons.

LinearAlgebra.BLAS.get_num_threadsFunction
get_num_threads()

Get the number of threads the BLAS library is using.

Julia 1.6

get_num_threads requires at least Julia 1.6.
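
Together with set_num_threads this can be used to pin the BLAS thread count; a small sketch (the default depends on the backend and machine):

julia> BLAS.set_num_threads(2)

julia> BLAS.get_num_threads()
2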

BLAS functions can be divided into three groups, also called three levels, depending on when they were first proposed, the type of input parameters, and the complexity of the operation.

Level 1 BLAS functions

The level 1 BLAS functions were first proposed in [(Lawson, 1979)][Lawson-1979] and define operations between scalars and vectors.

[Lawson-1979]: https://dl.acm.org/doi/10.1145/355841.355847

LinearAlgebra.BLAS.rot!Function
rot!(n, X, incx, Y, incy, c, s)

OverwriteX withc*X + s*Y andY with-conj(s)*X + c*Y for the firstn elements of arrayX with strideincx and firstn elements of arrayY with strideincy. ReturnsX andY.

Julia 1.5

rot! requires at least Julia 1.5.

LinearAlgebra.BLAS.scal!Function
scal!(n, a, X, incx)scal!(a, X)

OverwriteX witha*X for the firstn elements of arrayX with strideincx. ReturnsX.

Ifn andincx are not provided,length(X) andstride(X,1) are used.
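
For example, scaling every other element in place (a small sketch):

julia> X = fill(1.0, 4);

julia> BLAS.scal!(2, 3.0, X, 2)   # scale 2 elements with stride 2, i.e. X[1] and X[3]
4-element Vector{Float64}:
 3.0
 1.0
 3.0
 1.0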

LinearAlgebra.BLAS.scalFunction
scal(n, a, X, incx)scal(a, X)

ReturnX scaled bya for the firstn elements of arrayX with strideincx.

Ifn andincx are not provided,length(X) andstride(X,1) are used.

LinearAlgebra.BLAS.blascopy!Function
blascopy!(n, X, incx, Y, incy)

Copyn elements of arrayX with strideincx to arrayY with strideincy. ReturnsY.

LinearAlgebra.BLAS.dotFunction
dot(n, X, incx, Y, incy)

Dot product of two vectors consisting ofn elements of arrayX with strideincx andn elements of arrayY with strideincy.

Examples

julia> BLAS.dot(10, fill(1.0, 10), 1, fill(1.0, 20), 2)10.0
LinearAlgebra.BLAS.dotuFunction
dotu(n, X, incx, Y, incy)

Dot function for two complex vectors consisting ofn elements of arrayX with strideincx andn elements of arrayY with strideincy.

Examples

julia> BLAS.dotu(10, fill(1.0im, 10), 1, fill(1.0+im, 20), 2)-10.0 + 10.0im
LinearAlgebra.BLAS.dotcFunction
dotc(n, X, incx, U, incy)

Dot function for two complex vectors, consisting ofn elements of arrayX with strideincx andn elements of arrayU with strideincy, conjugating the first vector.

Examples

julia> BLAS.dotc(10, fill(1.0im, 10), 1, fill(1.0+im, 20), 2)10.0 - 10.0im
LinearAlgebra.BLAS.nrm2Function
nrm2(n, X, incx)

2-norm of a vector consisting ofn elements of arrayX with strideincx.

Examples

julia> BLAS.nrm2(4, fill(1.0, 8), 2)2.0julia> BLAS.nrm2(1, fill(1.0, 8), 2)1.0
LinearAlgebra.BLAS.asumFunction
asum(n, X, incx)

Sum of the magnitudes of the firstn elements of arrayX with strideincx.

For a real array, the magnitude is the absolute value. For a complex array, the magnitude is the sum of the absolute value of the real part and the absolute value of the imaginary part.

Examples

julia> BLAS.asum(5, fill(1.0im, 10), 2)5.0julia> BLAS.asum(2, fill(1.0im, 10), 5)2.0
LinearAlgebra.BLAS.iamaxFunction
iamax(n, dx, incx)iamax(dx)

Find the index of the element ofdx with the maximum absolute value.n is the length ofdx, andincx is the stride. Ifn andincx are not provided, they assume default values ofn=length(dx) andincx=stride1(dx).
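
For example:

julia> v = [1.0, -5.0, 3.0];

julia> BLAS.iamax(v)   # |-5.0| is the largest magnitude
2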

Level 2 BLAS functions

The level 2 BLAS functions were published in [(Dongarra, 1988)][Dongarra-1988], and define matrix-vector operations.

[Dongarra-1988]: https://dl.acm.org/doi/10.1145/42288.42291

Functions that return a vector

LinearAlgebra.BLAS.gemv!Function
gemv!(tA, alpha, A, x, beta, y)

Update the vectory asalpha*A*x + beta*y oralpha*A'x + beta*y according totA.alpha andbeta are scalars. Return the updatedy.
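
A minimal sketch; with beta = 0.0 the previous contents of y are ignored:

julia> A = [1.0 2.0; 3.0 4.0]; x = [1.0, 1.0]; y = zeros(2);

julia> BLAS.gemv!('N', 1.0, A, x, 0.0, y)
2-element Vector{Float64}:
 3.0
 7.0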

LinearAlgebra.BLAS.gemvMethod
gemv(tA, alpha, A, x)

Returnalpha*A*x oralpha*A'x according totA.alpha is a scalar.

LinearAlgebra.BLAS.gemvMethod
gemv(tA, A, x)

ReturnA*x orA'x according totA.

LinearAlgebra.BLAS.gbmv!Function
gbmv!(trans, m, kl, ku, alpha, A, x, beta, y)

Update vectory asalpha*A*x + beta*y oralpha*A'*x + beta*y according totrans. The matrixA is a general band matrix of dimensionm bysize(A,2) withkl sub-diagonals andku super-diagonals.alpha andbeta are scalars. Return the updatedy.

LinearAlgebra.BLAS.gbmvFunction
gbmv(trans, m, kl, ku, alpha, A, x)

Returnalpha*A*x oralpha*A'*x according totrans. The matrixA is a general band matrix of dimensionm bysize(A,2) withkl sub-diagonals andku super-diagonals, andalpha is a scalar.

LinearAlgebra.BLAS.hemv!Function
hemv!(ul, alpha, A, x, beta, y)

Update the vectory asalpha*A*x + beta*y.A is assumed to be Hermitian. Only theul triangle ofA is used.alpha andbeta are scalars. Return the updatedy.

LinearAlgebra.BLAS.hemvMethod
hemv(ul, alpha, A, x)

Returnalpha*A*x.A is assumed to be Hermitian. Only theul triangle ofA is used.alpha is a scalar.

LinearAlgebra.BLAS.hemvMethod
hemv(ul, A, x)

ReturnA*x.A is assumed to be Hermitian. Only theul triangle ofA is used.

LinearAlgebra.BLAS.hpmv!Function
hpmv!(uplo, α, AP, x, β, y)

Update vectory asα*A*x + β*y, whereA is a Hermitian matrix provided in packed formatAP.

Withuplo = 'U', the array AP must contain the upper triangular part of the Hermitian matrix packed sequentially, column by column, so thatAP[1] containsA[1, 1],AP[2] andAP[3] containA[1, 2] andA[2, 2] respectively, and so on.

Withuplo = 'L', the array AP must contain the lower triangular part of the Hermitian matrix packed sequentially, column by column, so thatAP[1] containsA[1, 1],AP[2] andAP[3] containA[2, 1] andA[3, 1] respectively, and so on.

The scalar inputsα andβ must be complex or real numbers.

The array inputsx,y andAP must all be ofComplexF32 orComplexF64 type.

Return the updatedy.

Julia 1.5

hpmv! requires at least Julia 1.5.

LinearAlgebra.BLAS.symv!Function
symv!(ul, alpha, A, x, beta, y)

Update the vectory asalpha*A*x + beta*y.A is assumed to be symmetric. Only theul triangle ofA is used.alpha andbeta are scalars. Return the updatedy.

LinearAlgebra.BLAS.symvMethod
symv(ul, alpha, A, x)

Returnalpha*A*x.A is assumed to be symmetric. Only theul triangle ofA is used.alpha is a scalar.

LinearAlgebra.BLAS.symvMethod
symv(ul, A, x)

ReturnA*x.A is assumed to be symmetric. Only theul triangle ofA is used.

LinearAlgebra.BLAS.sbmv!Function
sbmv!(uplo, k, alpha, A, x, beta, y)

Update vector y as alpha*A*x + beta*y where A is a symmetric band matrix of order size(A,2) with k super-diagonals stored in the argument A. The storage layout for A is described in the reference BLAS module, level-2 BLAS, at https://www.netlib.org/lapack/explore-html/. Only the uplo triangle of A is used.

Return the updatedy.

LinearAlgebra.BLAS.sbmvMethod
sbmv(uplo, k, alpha, A, x)

Returnalpha*A*x whereA is a symmetric band matrix of ordersize(A,2) withk super-diagonals stored in the argumentA. Only theuplo triangle ofA is used.

LinearAlgebra.BLAS.sbmvMethod
sbmv(uplo, k, A, x)

ReturnA*x whereA is a symmetric band matrix of ordersize(A,2) withk super-diagonals stored in the argumentA. Only theuplo triangle ofA is used.

LinearAlgebra.BLAS.spmv!Function
spmv!(uplo, α, AP, x, β, y)

Update vectory asα*A*x + β*y, whereA is a symmetric matrix provided in packed formatAP.

Withuplo = 'U', the array AP must contain the upper triangular part of the symmetric matrix packed sequentially, column by column, so thatAP[1] containsA[1, 1],AP[2] andAP[3] containA[1, 2] andA[2, 2] respectively, and so on.

Withuplo = 'L', the array AP must contain the lower triangular part of the symmetric matrix packed sequentially, column by column, so thatAP[1] containsA[1, 1],AP[2] andAP[3] containA[2, 1] andA[3, 1] respectively, and so on.

The scalar inputsα andβ must be real.

The array inputsx,y andAP must all be ofFloat32 orFloat64 type.

Return the updatedy.

Julia 1.5

spmv! requires at least Julia 1.5.

LinearAlgebra.BLAS.trmv!Function
trmv!(ul, tA, dA, A, b)

Returnop(A)*b, whereop is determined bytA. Only theul triangle ofA is used.dA determines if the diagonal values are read or are assumed to be all ones. The multiplication occurs in-place onb.

LinearAlgebra.BLAS.trmvFunction
trmv(ul, tA, dA, A, b)

Returnop(A)*b, whereop is determined bytA. Only theul triangle ofA is used.dA determines if the diagonal values are read or are assumed to be all ones.

LinearAlgebra.BLAS.trsv!Function
trsv!(ul, tA, dA, A, b)

Overwriteb with the solution toA*x = b or one of the other two variants determined bytA andul.dA determines if the diagonal values are read or are assumed to be all ones. Return the updatedb.

LinearAlgebra.BLAS.trsvFunction
trsv(ul, tA, dA, A, b)

Return the solution toA*x = b or one of the other two variants determined bytA andul.dA determines if the diagonal values are read or are assumed to be all ones.
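
For example, solving an upper triangular system (a small sketch):

julia> A = [1.0 2.0; 0.0 3.0]; b = [5.0, 6.0];

julia> x = BLAS.trsv('U', 'N', 'N', A, b)
2-element Vector{Float64}:
 1.0
 2.0

julia> A * x ≈ b
true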

Functions that return a matrix

LinearAlgebra.BLAS.ger!Function
ger!(alpha, x, y, A)

Rank-1 update of the matrixA with vectorsx andy asalpha*x*y' + A.
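
A minimal sketch of the rank-1 update (ger! returns the updated A):

julia> A = zeros(2, 2); x = [1.0, 2.0]; y = [3.0, 4.0];

julia> BLAS.ger!(1.0, x, y, A)
2×2 Matrix{Float64}:
 3.0  4.0
 6.0  8.0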

LinearAlgebra.BLAS.her!Function
her!(uplo, alpha, x, A)

Methods for complex arrays only. Rank-1 update of the Hermitian matrixA with vectorx asalpha*x*x' + A.uplo controls which triangle ofA is updated. ReturnsA.

LinearAlgebra.BLAS.syr!Function
syr!(uplo, alpha, x, A)

Rank-1 update of the symmetric matrixA with vectorx asalpha*x*transpose(x) + A.uplo controls which triangle ofA is updated. ReturnsA.

LinearAlgebra.BLAS.spr!Function
spr!(uplo, α, x, AP)

Update matrixA asA+α*x*x', whereA is a symmetric matrix provided in packed formatAP andx is a vector.

Withuplo = 'U', the array AP must contain the upper triangular part of the symmetric matrix packed sequentially, column by column, so thatAP[1] containsA[1, 1],AP[2] andAP[3] containA[1, 2] andA[2, 2] respectively, and so on.

Withuplo = 'L', the array AP must contain the lower triangular part of the symmetric matrix packed sequentially, column by column, so thatAP[1] containsA[1, 1],AP[2] andAP[3] containA[2, 1] andA[3, 1] respectively, and so on.

The scalar inputα must be real.

The array inputsx andAP must all be ofFloat32 orFloat64 type. Return the updatedAP.

Julia 1.8

spr! requires at least Julia 1.8.
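
A small sketch; here AP packs the upper triangle of the symmetric matrix [1 2; 2 3] column by column:

julia> AP = [1.0, 2.0, 3.0];   # A[1,1], A[1,2], A[2,2]

julia> x = [1.0, 1.0];

julia> BLAS.spr!('U', 2.0, x, AP)   # packs [3 4; 4 5]
3-element Vector{Float64}:
 3.0
 4.0
 5.0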

Level 3 BLAS functions

The level 3 BLAS functions were published in [(Dongarra, 1990)][Dongarra-1990], and define matrix-matrix operations.

[Dongarra-1990]: https://dl.acm.org/doi/10.1145/77626.79170

LinearAlgebra.BLAS.gemmt!Function
gemmt!(uplo, tA, tB, alpha, A, B, beta, C)

Update the lower or upper triangular part specified byuplo ofC asalpha*A*B + beta*C or the other variants according totA andtB. Return the updatedC.

Julia 1.11

gemmt! requires at least Julia 1.11.

LinearAlgebra.BLAS.gemmtMethod
gemmt(uplo, tA, tB, alpha, A, B)

Return the lower or upper triangular part specified by uplo of alpha*A*B or the other three variants according to tA and tB.

Julia 1.11

gemmt requires at least Julia 1.11.

LinearAlgebra.BLAS.gemmtMethod
gemmt(uplo, tA, tB, A, B)

Return the lower or upper triangular part specified byuplo ofA*B or the other three variants according totA andtB.

Julia 1.11

gemmt requires at least Julia 1.11.

LinearAlgebra.BLAS.gemm!Function
gemm!(tA, tB, alpha, A, B, beta, C)

UpdateC asalpha*A*B + beta*C or the other three variants according totA andtB. Return the updatedC.
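
A minimal sketch; with beta = 0.0 the previous contents of C are overwritten:

julia> A = [1.0 2.0; 3.0 4.0]; B = [5.0 6.0; 7.0 8.0]; C = zeros(2, 2);

julia> BLAS.gemm!('N', 'N', 1.0, A, B, 0.0, C)
2×2 Matrix{Float64}:
 19.0  22.0
 43.0  50.0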

LinearAlgebra.BLAS.gemmMethod
gemm(tA, tB, alpha, A, B)

Returnalpha*A*B or the other three variants according totA andtB.

LinearAlgebra.BLAS.gemmMethod
gemm(tA, tB, A, B)

ReturnA*B or the other three variants according totA andtB.

LinearAlgebra.BLAS.symm!Function
symm!(side, ul, alpha, A, B, beta, C)

UpdateC asalpha*A*B + beta*C oralpha*B*A + beta*C according toside.A is assumed to be symmetric. Only theul triangle ofA is used. Return the updatedC.

LinearAlgebra.BLAS.symmMethod
symm(side, ul, alpha, A, B)

Returnalpha*A*B oralpha*B*A according toside.A is assumed to be symmetric. Only theul triangle ofA is used.

LinearAlgebra.BLAS.symmMethod
symm(side, ul, A, B)

ReturnA*B orB*A according toside.A is assumed to be symmetric. Only theul triangle ofA is used.

LinearAlgebra.BLAS.hemm!Function
hemm!(side, ul, alpha, A, B, beta, C)

UpdateC asalpha*A*B + beta*C oralpha*B*A + beta*C according toside.A is assumed to be Hermitian. Only theul triangle ofA is used. Return the updatedC.

LinearAlgebra.BLAS.hemmMethod
hemm(side, ul, alpha, A, B)

Returnalpha*A*B oralpha*B*A according toside.A is assumed to be Hermitian. Only theul triangle ofA is used.

LinearAlgebra.BLAS.hemmMethod
hemm(side, ul, A, B)

ReturnA*B orB*A according toside.A is assumed to be Hermitian. Only theul triangle ofA is used.

LinearAlgebra.BLAS.syrk!Function
syrk!(uplo, trans, alpha, A, beta, C)

Rank-k update of the symmetric matrixC asalpha*A*transpose(A) + beta*C oralpha*transpose(A)*A + beta*C according totrans. Only theuplo triangle ofC is used. ReturnC.
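
A small sketch; note that only the requested triangle of C is written:

julia> A = [1.0 2.0; 3.0 4.0]; C = zeros(2, 2);

julia> BLAS.syrk!('U', 'N', 1.0, A, 0.0, C)   # upper triangle of A*transpose(A)
2×2 Matrix{Float64}:
 5.0  11.0
 0.0  25.0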

LinearAlgebra.BLAS.syrkFunction
syrk(uplo, trans, alpha, A)

Return the uplo triangle of alpha*A*transpose(A) or alpha*transpose(A)*A, according to trans.

LinearAlgebra.BLAS.herk!Function
herk!(uplo, trans, alpha, A, beta, C)

Methods for complex arrays only. Rank-k update of the Hermitian matrixC asalpha*A*A' + beta*C oralpha*A'*A + beta*C according totrans. Only theuplo triangle ofC is updated. ReturnsC.

LinearAlgebra.BLAS.herkFunction
herk(uplo, trans, alpha, A)

Methods for complex arrays only. Returns theuplo triangle ofalpha*A*A' oralpha*A'*A, according totrans.

LinearAlgebra.BLAS.syr2k!Function
syr2k!(uplo, trans, alpha, A, B, beta, C)

Rank-2k update of the symmetric matrixC asalpha*A*transpose(B) + alpha*B*transpose(A) + beta*C oralpha*transpose(A)*B + alpha*transpose(B)*A + beta*C according totrans. Only theuplo triangle ofC is used. ReturnsC.

LinearAlgebra.BLAS.syr2kFunction
syr2k(uplo, trans, alpha, A, B)

Returns theuplo triangle ofalpha*A*transpose(B) + alpha*B*transpose(A) oralpha*transpose(A)*B + alpha*transpose(B)*A, according totrans.

syr2k(uplo, trans, A, B)

Return theuplo triangle ofA*transpose(B) + B*transpose(A) ortranspose(A)*B + transpose(B)*A, according totrans.

LinearAlgebra.BLAS.her2k!Function
her2k!(uplo, trans, alpha, A, B, beta, C)

Rank-2k update of the Hermitian matrixC asalpha*A*B' + alpha*B*A' + beta*C oralpha*A'*B + alpha*B'*A + beta*C according totrans. The scalarbeta has to be real. Only theuplo triangle ofC is used. ReturnC.

LinearAlgebra.BLAS.her2kFunction
her2k(uplo, trans, alpha, A, B)

Return theuplo triangle ofalpha*A*B' + alpha*B*A' oralpha*A'*B + alpha*B'*A, according totrans.

her2k(uplo, trans, A, B)

Return theuplo triangle ofA*B' + B*A' orA'*B + B'*A, according totrans.

LinearAlgebra.BLAS.trmm!Function
trmm!(side, ul, tA, dA, alpha, A, B)

UpdateB asalpha*A*B or one of the other three variants determined byside andtA. Only theul triangle ofA is used.dA determines if the diagonal values are read or are assumed to be all ones. Return the updatedB.

LinearAlgebra.BLAS.trmmFunction
trmm(side, ul, tA, dA, alpha, A, B)

Returnalpha*A*B or one of the other three variants determined byside andtA. Only theul triangle ofA is used.dA determines if the diagonal values are read or are assumed to be all ones.

LinearAlgebra.BLAS.trsm!Function
trsm!(side, ul, tA, dA, alpha, A, B)

OverwriteB with the solution toA*X = alpha*B or one of the other three variants determined byside andtA. Only theul triangle ofA is used.dA determines if the diagonal values are read or are assumed to be all ones. Returns the updatedB.

LinearAlgebra.BLAS.trsmFunction
trsm(side, ul, tA, dA, alpha, A, B)

Return the solution to A*X = alpha*B or one of the other three variants determined by side and tA. Only the ul triangle of A is used. dA determines if the diagonal values are read or are assumed to be all ones.

LAPACK functions

LinearAlgebra.LAPACK provides wrappers for some of the LAPACK functions for linear algebra. Those functions that overwrite one of the input arrays have names ending in'!'.

Usually a function has 4 methods defined, one each forFloat64,Float32,ComplexF64 andComplexF32 arrays.

Note that the LAPACK API provided by Julia can and will change in the future. Since this API is not user-facing, there is no commitment to support/deprecate this specific set of functions in future releases.

LinearAlgebra.LAPACKModule

Interfaces to LAPACK subroutines.

LinearAlgebra.LAPACK.gbtrf!Function
gbtrf!(kl, ku, m, AB) -> (AB, ipiv)

Compute the LU factorization of a banded matrixAB.kl is the first subdiagonal containing a nonzero band,ku is the last superdiagonal containing one, andm is the first dimension of the matrixAB. Returns the LU factorization in-place andipiv, the vector of pivots used.

LinearAlgebra.LAPACK.gbtrs!Function
gbtrs!(trans, kl, ku, m, AB, ipiv, B)

Solve the equationAB * X = B.trans determines the orientation ofAB. It may beN (no transpose),T (transpose), orC (conjugate transpose).kl is the first subdiagonal containing a nonzero band,ku is the last superdiagonal containing one, andm is the first dimension of the matrixAB.ipiv is the vector of pivots returned fromgbtrf!. Returns the vector or matrixX, overwritingB in-place.

LinearAlgebra.LAPACK.gebal!Function
gebal!(job, A) -> (ilo, ihi, scale)

Balance the matrixA before computing its eigensystem or Schur factorization.job can be one ofN (A will not be permuted or scaled),P (A will only be permuted),S (A will only be scaled), orB (A will be both permuted and scaled). ModifiesA in-place and returnsilo,ihi, andscale. If permuting was turned on,A[i,j] = 0 ifj > i and1 < j < ilo orj > ihi.scale contains information about the scaling/permutations performed.

LinearAlgebra.LAPACK.gebak!Function
gebak!(job, side, ilo, ihi, scale, V)

Transform the eigenvectorsV of a matrix balanced usinggebal! to the unscaled/unpermuted eigenvectors of the original matrix. ModifiesV in-place.side can beL (left eigenvectors are transformed) orR (right eigenvectors are transformed).

LinearAlgebra.LAPACK.gebrd!Function
gebrd!(A) -> (A, d, e, tauq, taup)

ReduceA in-place to bidiagonal formA = QBP'. ReturnsA, containing the bidiagonal matrixB;d, containing the diagonal elements ofB;e, containing the off-diagonal elements ofB;tauq, containing the elementary reflectors representingQ; andtaup, containing the elementary reflectors representingP.

LinearAlgebra.LAPACK.gelqf!Function
gelqf!(A, tau)

Compute theLQ factorization ofA,A = LQ.tau contains scalars which parameterize the elementary reflectors of the factorization.tau must have length greater than or equal to the smallest dimension ofA.

ReturnsA andtau modified in-place.

gelqf!(A) -> (A, tau)

Compute theLQ factorization ofA,A = LQ.

ReturnsA, modified in-place, andtau, which contains scalars which parameterize the elementary reflectors of the factorization.

LinearAlgebra.LAPACK.geqlf!Function
geqlf!(A, tau)

Compute theQL factorization ofA,A = QL.tau contains scalars which parameterize the elementary reflectors of the factorization.tau must have length greater than or equal to the smallest dimension ofA.

ReturnsA andtau modified in-place.

geqlf!(A) -> (A, tau)

Compute theQL factorization ofA,A = QL.

ReturnsA, modified in-place, andtau, which contains scalars which parameterize the elementary reflectors of the factorization.

LinearAlgebra.LAPACK.geqrf!Function
geqrf!(A, tau)

Compute theQR factorization ofA,A = QR.tau contains scalars which parameterize the elementary reflectors of the factorization.tau must have length greater than or equal to the smallest dimension ofA.

ReturnsA andtau modified in-place.

geqrf!(A) -> (A, tau)

Compute theQR factorization ofA,A = QR.

ReturnsA, modified in-place, andtau, which contains scalars which parameterize the elementary reflectors of the factorization.
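
A sketch of a typical call sequence (the example matrix is arbitrary); the thin Q factor can be materialized afterwards with orgqr!, documented further below:

using LinearAlgebra
A0 = [1.0 2.0; 3.0 4.0; 5.0 6.0]
A, tau = LAPACK.geqrf!(copy(A0))
R = triu(A[1:2, :])            # the R factor sits in the upper triangle of the output
Q = LAPACK.orgqr!(A, tau)      # overwrite the reflectors with the thin Q
Q * R ≈ A0                     # true up to rounding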

LinearAlgebra.LAPACK.geqp3!Function
geqp3!(A, [jpvt, tau]) -> (A, tau, jpvt)

Compute the pivotedQR factorization ofA,AP = QR using BLAS level 3.P is a pivoting matrix, represented byjpvt.tau stores the elementary reflectors. The argumentsjpvt andtau are optional and allow for passing preallocated arrays. When passed,jpvt must have length greater than or equal ton ifA is an(m x n) matrix andtau must have length greater than or equal to the smallest dimension ofA. On entry, ifjpvt[j] does not equal zero then thejth column ofA is permuted to the front ofAP.

A,jpvt, andtau are modified in-place.

LinearAlgebra.LAPACK.gerqf!Function
gerqf!(A, tau)

Compute theRQ factorization ofA,A = RQ.tau contains scalars which parameterize the elementary reflectors of the factorization.tau must have length greater than or equal to the smallest dimension ofA.

ReturnsA andtau modified in-place.

gerqf!(A) -> (A, tau)

Compute theRQ factorization ofA,A = RQ.

ReturnsA, modified in-place, andtau, which contains scalars which parameterize the elementary reflectors of the factorization.

LinearAlgebra.LAPACK.geqrt!Function
geqrt!(A, T)

Compute the blockedQR factorization ofA,A = QR.T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension ofT sets the block size and it must be between 1 andn. The second dimension ofT must equal the smallest dimension ofA.

ReturnsA andT modified in-place.

geqrt!(A, nb) -> (A, T)

Compute the blockedQR factorization ofA,A = QR.nb sets the block size and it must be between 1 andn, the second dimension ofA.

ReturnsA, modified in-place, andT, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

LinearAlgebra.LAPACK.geqrt3!Function
geqrt3!(A, T)

Recursively computes the blockedQR factorization ofA,A = QR.T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension ofT sets the block size and it must be between 1 andn. The second dimension ofT must equal the smallest dimension ofA.

ReturnsA andT modified in-place.

geqrt3!(A) -> (A, T)

Recursively computes the blockedQR factorization ofA,A = QR.

ReturnsA, modified in-place, andT, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

LinearAlgebra.LAPACK.getrf!Function
getrf!(A, ipiv) -> (A, ipiv, info)

Compute the pivotedLU factorization ofA,A = LU.ipiv contains the pivoting information andinfo a code which indicates success (info = 0), a singular value inU (info = i, in which caseU[i,i] is singular), or an error code (info < 0).

getrf!(A) -> (A, ipiv, info)

Compute the pivotedLU factorization ofA,A = LU.

ReturnsA, modified in-place,ipiv, the pivoting information, and aninfo code which indicates success (info = 0), a singular value inU (info = i, in which caseU[i,i] is singular), or an error code (info < 0).

LinearAlgebra.LAPACK.tzrzf!Function
tzrzf!(A) -> (A, tau)

Transforms the upper trapezoidal matrixA to upper triangular form in-place. ReturnsA andtau, the scalar parameters for the elementary reflectors of the transformation.

LinearAlgebra.LAPACK.ormrz!Function
ormrz!(side, trans, A, tau, C)

Multiplies the matrixC byQ from the transformation supplied bytzrzf!. Depending onside ortrans the multiplication can be left-sided (side = L, Q*C) or right-sided (side = R, C*Q) andQ can be unmodified (trans = N), transposed (trans = T), or conjugate transposed (trans = C). Returns matrixC which is modified in-place with the result of the multiplication.

LinearAlgebra.LAPACK.gels!Function
gels!(trans, A, B) -> (F, B, ssr)

Solves the linear equation A * X = B, transpose(A) * X = B, or adjoint(A) * X = B using a QR or LQ factorization. Modifies the matrix/vector B in place with the solution. A is overwritten with its QR or LQ factorization. trans may be one of N (no modification), T (transpose), or C (conjugate transpose). gels! searches for the minimum-norm/least-squares solution. A may be under- or over-determined. The solution is returned in B.

LinearAlgebra.LAPACK.gesv!Function
gesv!(A, B) -> (B, A, ipiv)

Solves the linear equationA * X = B whereA is a square matrix using theLU factorization ofA.A is overwritten with itsLU factorization andB is overwritten with the solutionX.ipiv contains the pivoting information for theLU factorization ofA.
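
For example (arbitrary data; both arguments are overwritten, hence the copies):

using LinearAlgebra
A = [1.0 2.0; 3.0 4.0]
b = [5.0, 6.0]
x, AF, ipiv = LAPACK.gesv!(copy(A), copy(b))  # x overwrites the copy of b
A * x ≈ b                                     # true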

LinearAlgebra.LAPACK.getrs!Function
getrs!(trans, A, ipiv, B)

Solves the linear equationA * X = B,transpose(A) * X = B, oradjoint(A) * X = B for squareA. Modifies the matrix/vectorB in place with the solution.A is theLU factorization fromgetrf!, withipiv the pivoting information.trans may be one ofN (no modification),T (transpose), orC (conjugate transpose).
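
This lets a factorization from getrf! be reused for several right-hand sides. A minimal sketch with arbitrary data:

using LinearAlgebra
A = [4.0 3.0; 6.0 3.0]
AF, ipiv, info = LAPACK.getrf!(copy(A))        # info == 0 on success
x = LAPACK.getrs!('N', AF, ipiv, [7.0, 9.0])   # solve A * x = [7, 9] in place
A * x ≈ [7.0, 9.0]                             # true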

LinearAlgebra.LAPACK.getri!Function
getri!(A, ipiv)

Computes the inverse of A, using its LU factorization found by getrf!. ipiv is the pivot information output by getrf!, and A contains the LU factorization from getrf!. A is overwritten with its inverse.

LinearAlgebra.LAPACK.gesvx!Function
gesvx!(fact, trans, A, AF, ipiv, equed, R, C, B) -> (X, equed, R, C, B, rcond, ferr, berr, work)

Solves the linear equationA * X = B (trans = N),transpose(A) * X = B (trans = T), oradjoint(A) * X = B (trans = C) using theLU factorization ofA.fact may beE, in which caseA will be equilibrated and copied toAF;F, in which caseAF andipiv from a previousLU factorization are inputs; orN, in which caseA will be copied toAF and then factored. Iffact = F,equed may beN, meaningA has not been equilibrated;R, meaningA was multiplied byDiagonal(R) from the left;C, meaningA was multiplied byDiagonal(C) from the right; orB, meaningA was multiplied byDiagonal(R) from the left andDiagonal(C) from the right. Iffact = F andequed = R orB the elements ofR must all be positive. Iffact = F andequed = C orB the elements ofC must all be positive.

Returns the solution X; equed, which is an output if fact is not N, and describes the equilibration that was performed; R, the row equilibration diagonal; C, the column equilibration diagonal; B, which may be overwritten with its equilibrated form Diagonal(R)*B (if trans = N and equed = R,B) or Diagonal(C)*B (if trans = T,C and equed = C,B); rcond, the reciprocal condition number of A after equilibrating; ferr, the forward error bound for each solution vector in X; berr, the backward error bound for each solution vector in X; and work, the reciprocal pivot growth factor.

gesvx!(A, B)

The no-equilibration, no-transpose simplification ofgesvx!.

LinearAlgebra.LAPACK.gelsd!Function
gelsd!(A, B, rcond) -> (B, rnk)

Computes the minimum-norm solution of A * X = B by finding the SVD factorization of A, then solving the problem with a divide-and-conquer approach. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.
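
For example, an overdetermined least-squares problem (arbitrary data; by LAPACK convention a negative rcond uses machine precision as the cutoff):

using LinearAlgebra
A = [1.0 1.0; 1.0 2.0; 1.0 3.0]   # 3×2, full column rank
b = [1.0, 2.0, 2.0]
x, rnk = LAPACK.gelsd!(copy(A), copy(b), -1.0)
rnk == 2                          # effective rank of A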

LinearAlgebra.LAPACK.gelsy!Function
gelsy!(A, B, rcond) -> (B, rnk)

Computes the minimum-norm solution of A * X = B using a complete orthogonal factorization of A (a column-pivoted QR factorization). B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.

LinearAlgebra.LAPACK.gglse!Function
gglse!(A, c, B, d) -> (X, res)

Solves the least squares problem of minimizing norm(c - A*x)^2 subject to the equality constraint B * x = d. Returns the solution X and the residual sum-of-squares res.

LinearAlgebra.LAPACK.geev!Function
geev!(jobvl, jobvr, A) -> (W, VL, VR)

Finds the eigensystem ofA. Ifjobvl = N, the left eigenvectors ofA aren't computed. Ifjobvr = N, the right eigenvectors ofA aren't computed. Ifjobvl = V orjobvr = V, the corresponding eigenvectors are computed. Returns the eigenvalues inW, the right eigenvectors inVR, and the left eigenvectors inVL.

LinearAlgebra.LAPACK.gesdd!Function
gesdd!(job, A) -> (U, S, VT)

Finds the singular value decomposition ofA,A = U * S * V', using a divide and conquer approach. Ifjob = A, all the columns ofU and the rows ofV' are computed. Ifjob = N, no columns ofU or rows ofV' are computed. Ifjob = O,A is overwritten with the columns of (thin)U and the rows of (thin)V'. Ifjob = S, the columns of (thin)U and the rows of (thin)V' are computed and returned separately.

LinearAlgebra.LAPACK.gesvd!Function
gesvd!(jobu, jobvt, A) -> (U, S, VT)

Finds the singular value decomposition ofA,A = U * S * V'. Ifjobu = A, all the columns ofU are computed. Ifjobvt = A all the rows ofV' are computed. Ifjobu = N, no columns ofU are computed. Ifjobvt = N no rows ofV' are computed. Ifjobu = O,A is overwritten with the columns of (thin)U. Ifjobvt = O,A is overwritten with the rows of (thin)V'. Ifjobu = S, the columns of (thin)U are computed and returned separately. Ifjobvt = S the rows of (thin)V' are computed and returned separately.jobu andjobvt can't both beO.

ReturnsU,S, andVt, whereS are the singular values ofA.

LinearAlgebra.LAPACK.ggsvd!Function
ggsvd!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition ofA andB,U'*A*Q = D1*R andV'*B*Q = D2*R.D1 hasalpha on its diagonal andD2 hasbeta on its diagonal. Ifjobu = U, the orthogonal/unitary matrixU is computed. Ifjobv = V the orthogonal/unitary matrixV is computed. Ifjobq = Q, the orthogonal/unitary matrixQ is computed. Ifjobu,jobv orjobq isN, that matrix is not computed. This function is only available in LAPACK versions prior to 3.6.0.

LinearAlgebra.LAPACK.ggsvd3!Function
ggsvd3!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition ofA andB,U'*A*Q = D1*R andV'*B*Q = D2*R.D1 hasalpha on its diagonal andD2 hasbeta on its diagonal. Ifjobu = U, the orthogonal/unitary matrixU is computed. Ifjobv = V the orthogonal/unitary matrixV is computed. Ifjobq = Q, the orthogonal/unitary matrixQ is computed. Ifjobu,jobv, orjobq isN, that matrix is not computed. This function requires LAPACK 3.6.0.

LinearAlgebra.LAPACK.geevx!Function
geevx!(balanc, jobvl, jobvr, sense, A) -> (A, w, VL, VR, ilo, ihi, scale, abnrm, rconde, rcondv)

Finds the eigensystem of A with matrix balancing. If jobvl = N, the left eigenvectors of A aren't computed. If jobvr = N, the right eigenvectors of A aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. If balanc = N, no balancing is performed. If balanc = P, A is permuted but not scaled. If balanc = S, A is scaled but not permuted. If balanc = B, A is permuted and scaled. If sense = N, no reciprocal condition numbers are computed. If sense = E, reciprocal condition numbers are computed for the eigenvalues only. If sense = V, reciprocal condition numbers are computed for the right eigenvectors only. If sense = B, reciprocal condition numbers are computed for both the eigenvalues and the right eigenvectors. If sense = E or B, both left and right eigenvectors must be computed.

LinearAlgebra.LAPACK.ggev!Function
ggev!(jobvl, jobvr, A, B) -> (alpha, beta, vl, vr)

Finds the generalized eigendecomposition ofA andB. Ifjobvl = N, the left eigenvectors aren't computed. Ifjobvr = N, the right eigenvectors aren't computed. Ifjobvl = V orjobvr = V, the corresponding eigenvectors are computed.

LinearAlgebra.LAPACK.ggev3!Function
ggev3!(jobvl, jobvr, A, B) -> (alpha, beta, vl, vr)

Finds the generalized eigendecomposition ofA andB using a blocked algorithm. Ifjobvl = N, the left eigenvectors aren't computed. Ifjobvr = N, the right eigenvectors aren't computed. Ifjobvl = V orjobvr = V, the corresponding eigenvectors are computed. This function requires LAPACK 3.6.0.

LinearAlgebra.LAPACK.gtsv!Function
gtsv!(dl, d, du, B)

Solves the equationA * X = B whereA is a tridiagonal matrix withdl on the subdiagonal,d on the diagonal, anddu on the superdiagonal.

OverwritesB with the solutionX and returns it.
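
For example (arbitrary diagonally dominant data):

using LinearAlgebra
dl = [1.0, 1.0]                 # subdiagonal
d  = [4.0, 4.0, 4.0]            # diagonal
du = [1.0, 1.0]                 # superdiagonal
b  = [6.0, 12.0, 14.0]
x = LAPACK.gtsv!(copy(dl), copy(d), copy(du), copy(b))
Tridiagonal(dl, d, du) * x ≈ b  # true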

LinearAlgebra.LAPACK.gttrf!Function
gttrf!(dl, d, du) -> (dl, d, du, du2, ipiv)

Finds theLU factorization of a tridiagonal matrix withdl on the subdiagonal,d on the diagonal, anddu on the superdiagonal.

Modifiesdl,d, anddu in-place and returns them and the second superdiagonaldu2 and the pivoting vectoripiv.

LinearAlgebra.LAPACK.gttrs!Function
gttrs!(trans, dl, d, du, du2, ipiv, B)

Solves the equationA * X = B (trans = N),transpose(A) * X = B (trans = T), oradjoint(A) * X = B (trans = C) using theLU factorization computed bygttrf!.B is overwritten with the solutionX.

LinearAlgebra.LAPACK.orglq!Function
orglq!(A, tau, k = length(tau))

Explicitly finds the matrixQ of aLQ factorization after callinggelqf! onA. Uses the output ofgelqf!.A is overwritten byQ.

LinearAlgebra.LAPACK.orgqr!Function
orgqr!(A, tau, k = length(tau))

Explicitly finds the matrixQ of aQR factorization after callinggeqrf! onA. Uses the output ofgeqrf!.A is overwritten byQ.

LinearAlgebra.LAPACK.orgql!Function
orgql!(A, tau, k = length(tau))

Explicitly finds the matrixQ of aQL factorization after callinggeqlf! onA. Uses the output ofgeqlf!.A is overwritten byQ.

LinearAlgebra.LAPACK.orgrq!Function
orgrq!(A, tau, k = length(tau))

Explicitly finds the matrixQ of aRQ factorization after callinggerqf! onA. Uses the output ofgerqf!.A is overwritten byQ.

LinearAlgebra.LAPACK.ormlq!Function
ormlq!(side, trans, A, tau, C)

ComputesQ * C (trans = N),transpose(Q) * C (trans = T),adjoint(Q) * C (trans = C) forside = L or the equivalent right-sided multiplication forside = R usingQ from aLQ factorization ofA computed usinggelqf!.C is overwritten.

LinearAlgebra.LAPACK.ormqr!Function
ormqr!(side, trans, A, tau, C)

ComputesQ * C (trans = N),transpose(Q) * C (trans = T),adjoint(Q) * C (trans = C) forside = L or the equivalent right-sided multiplication forside = R usingQ from aQR factorization ofA computed usinggeqrf!.C is overwritten.

LinearAlgebra.LAPACK.ormql!Function
ormql!(side, trans, A, tau, C)

ComputesQ * C (trans = N),transpose(Q) * C (trans = T),adjoint(Q) * C (trans = C) forside = L or the equivalent right-sided multiplication forside = R usingQ from aQL factorization ofA computed usinggeqlf!.C is overwritten.

LinearAlgebra.LAPACK.ormrq!Function
ormrq!(side, trans, A, tau, C)

ComputesQ * C (trans = N),transpose(Q) * C (trans = T),adjoint(Q) * C (trans = C) forside = L or the equivalent right-sided multiplication forside = R usingQ from aRQ factorization ofA computed usinggerqf!.C is overwritten.

LinearAlgebra.LAPACK.gemqrt!Function
gemqrt!(side, trans, V, T, C)

ComputesQ * C (trans = N),transpose(Q) * C (trans = T),adjoint(Q) * C (trans = C) forside = L or the equivalent right-sided multiplication forside = R usingQ from aQR factorization ofA computed usinggeqrt!.C is overwritten.

LinearAlgebra.LAPACK.posv!Function
posv!(uplo, A, B) -> (A, B)

Finds the solution toA * X = B whereA is a symmetric or Hermitian positive definite matrix. Ifuplo = U the upper Cholesky decomposition ofA is computed. Ifuplo = L the lower Cholesky decomposition ofA is computed.A is overwritten by its Cholesky decomposition.B is overwritten with the solutionX.
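
For example (an arbitrary positive definite matrix):

using LinearAlgebra
A = [4.0 2.0; 2.0 3.0]                       # symmetric positive definite
b = [1.0, 2.0]
AF, x = LAPACK.posv!('U', copy(A), copy(b))  # AF holds the Cholesky factor
A * x ≈ b                                    # true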

LinearAlgebra.LAPACK.potrf!Function
potrf!(uplo, A)

Computes the Cholesky (upper ifuplo = U, lower ifuplo = L) decomposition of positive-definite matrixA.A is overwritten and returned with an info code.

LinearAlgebra.LAPACK.potri!Function
potri!(uplo, A)

Computes the inverse of positive-definite matrixA after callingpotrf! to find its (upper ifuplo = U, lower ifuplo = L) Cholesky decomposition.

A is overwritten by its inverse and returned.

LinearAlgebra.LAPACK.potrs!Function
potrs!(uplo, A, B)

Finds the solution toA * X = B whereA is a symmetric or Hermitian positive definite matrix whose Cholesky decomposition was computed bypotrf!. Ifuplo = U the upper Cholesky decomposition ofA was computed. Ifuplo = L the lower Cholesky decomposition ofA was computed.B is overwritten with the solutionX.
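
A sketch of the factor-then-solve pattern with potrf! (arbitrary data):

using LinearAlgebra
A = [4.0 2.0; 2.0 3.0]                    # symmetric positive definite
AF, info = LAPACK.potrf!('U', copy(A))    # info == 0: factorization succeeded
x = LAPACK.potrs!('U', AF, [1.0, 2.0])    # reuse the factorization to solve
A * x ≈ [1.0, 2.0]                        # true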

LinearAlgebra.LAPACK.pstrf!Function
pstrf!(uplo, A, tol) -> (A, piv, rank, info)

Computes the (upper ifuplo = U, lower ifuplo = L) pivoted Cholesky decomposition of positive-definite matrixA with a user-set tolerancetol.A is overwritten by its Cholesky decomposition.

ReturnsA, the pivotspiv, the rank ofA, and aninfo code. Ifinfo = 0, the factorization succeeded. Ifinfo = i > 0, thenA is indefinite or rank-deficient.

LinearAlgebra.LAPACK.ptsv!Function
ptsv!(D, E, B)

SolvesA * X = B for positive-definite tridiagonalA.D is the diagonal ofA andE is the off-diagonal.B is overwritten with the solutionX and returned.

LinearAlgebra.LAPACK.pttrf!Function
pttrf!(D, E)

Computes the LDLt factorization of a positive-definite tridiagonal matrix withD as diagonal andE as off-diagonal.D andE are overwritten and returned.

LinearAlgebra.LAPACK.pttrs!Function
pttrs!(D, E, B)

SolvesA * X = B for positive-definite tridiagonalA with diagonalD and off-diagonalE after computingA's LDLt factorization usingpttrf!.B is overwritten with the solutionX.

LinearAlgebra.LAPACK.trtri!Function
trtri!(uplo, diag, A)

Finds the inverse of (upper ifuplo = U, lower ifuplo = L) triangular matrixA. Ifdiag = N,A has non-unit diagonal elements. Ifdiag = U, all diagonal elements ofA are one.A is overwritten with its inverse.

LinearAlgebra.LAPACK.trtrs!Function
trtrs!(uplo, trans, diag, A, B)

SolvesA * X = B (trans = N),transpose(A) * X = B (trans = T), oradjoint(A) * X = B (trans = C) for (upper ifuplo = U, lower ifuplo = L) triangular matrixA. Ifdiag = N,A has non-unit diagonal elements. Ifdiag = U, all diagonal elements ofA are one.B is overwritten with the solutionX.
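
For example (arbitrary upper triangular data):

using LinearAlgebra
A = [2.0 1.0; 0.0 4.0]                          # upper triangular
x = LAPACK.trtrs!('U', 'N', 'N', A, [4.0, 8.0])
A * x ≈ [4.0, 8.0]                              # true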

LinearAlgebra.LAPACK.trcon!Function
trcon!(norm, uplo, diag, A)

Finds the reciprocal condition number of (upper ifuplo = U, lower ifuplo = L) triangular matrixA. Ifdiag = N,A has non-unit diagonal elements. Ifdiag = U, all diagonal elements ofA are one. Ifnorm = I, the condition number is found in the infinity norm. Ifnorm = O or1, the condition number is found in the one norm.

LinearAlgebra.LAPACK.trevc!Function
trevc!(side, howmny, select, T, VL = similar(T), VR = similar(T))

Finds the eigensystem of an upper triangular matrixT. Ifside = R, the right eigenvectors are computed. Ifside = L, the left eigenvectors are computed. Ifside = B, both sets are computed. Ifhowmny = A, all eigenvectors are found. Ifhowmny = B, all eigenvectors are found and backtransformed usingVL andVR. Ifhowmny = S, only the eigenvectors corresponding to the values inselect are computed.

LinearAlgebra.LAPACK.trrfs!Function
trrfs!(uplo, trans, diag, A, B, X, Ferr, Berr) -> (Ferr, Berr)

Estimates the error in the solution to A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C), after computing X using trtrs!. If uplo = U, A is upper triangular. If uplo = L, A is lower triangular. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. Ferr and Berr are optional inputs. Ferr is the forward error and Berr is the backward error, each component-wise.

LinearAlgebra.LAPACK.stev!Function
stev!(job, dv, ev) -> (dv, Zmat)

Computes the eigensystem for a symmetric tridiagonal matrix withdv as diagonal andev as off-diagonal. Ifjob = N only the eigenvalues are found and returned indv. Ifjob = V then the eigenvectors are also found and returned inZmat.
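
For example (an arbitrary symmetric tridiagonal matrix):

using LinearAlgebra
dv = [2.0, 2.0, 2.0]                          # diagonal
ev = [1.0, 1.0]                               # off-diagonal
w, Z = LAPACK.stev!('V', copy(dv), copy(ev))  # eigenvalues in w, eigenvectors in Z
SymTridiagonal(dv, ev) * Z[:, 1] ≈ w[1] * Z[:, 1]   # eigenpair check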

LinearAlgebra.LAPACK.stebz!Function
stebz!(range, order, vl, vu, il, iu, abstol, dv, ev) -> (dv, iblock, isplit)

Computes the eigenvalues for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. If order = B, eigenvalues are ordered within a block. If order = E, they are ordered across all the blocks. abstol can be set as a tolerance for convergence.

LinearAlgebra.LAPACK.stegr!Function
stegr!(jobz, range, dv, ev, vl, vu, il, iu) -> (w, Z)

Computes the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) for a symmetric tridiagonal matrix withdv as diagonal andev as off-diagonal. Ifrange = A, all the eigenvalues are found. Ifrange = V, the eigenvalues in the half-open interval(vl, vu] are found. Ifrange = I, the eigenvalues with indices betweenil andiu are found. The eigenvalues are returned inw and the eigenvectors inZ.

LinearAlgebra.LAPACK.stein!Function
stein!(dv, ev_in, w_in, iblock_in, isplit_in)

Computes the eigenvectors for a symmetric tridiagonal matrix withdv as diagonal andev_in as off-diagonal.w_in specifies the input eigenvalues for which to find corresponding eigenvectors.iblock_in specifies the submatrices corresponding to the eigenvalues inw_in.isplit_in specifies the splitting points between the submatrix blocks.

LinearAlgebra.LAPACK.syconv!Function
syconv!(uplo, A, ipiv) -> (A, work)

Converts a symmetric matrixA (which has been factorized into a triangular matrix) into two matricesL andD. Ifuplo = U,A is upper triangular. Ifuplo = L, it is lower triangular.ipiv is the pivot vector from the triangular factorization.A is overwritten byL andD.

LinearAlgebra.LAPACK.sysv!Function
sysv!(uplo, A, B) -> (B, A, ipiv)

Finds the solution toA * X = B for symmetric matrixA. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.B is overwritten by the solutionX.A is overwritten by its Bunch-Kaufman factorization.ipiv contains pivoting information about the factorization.
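
For example (an arbitrary symmetric indefinite matrix):

using LinearAlgebra
A = [1.5 2.0; 2.0 -1.0]                            # symmetric, indefinite
b = [1.0, 2.0]
x, AF, ipiv = LAPACK.sysv!('U', copy(A), copy(b))  # x overwrites the copy of b
A * x ≈ b                                          # true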

LinearAlgebra.LAPACK.sytrf!Function
sytrf!(uplo, A) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a symmetric matrixA. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.

ReturnsA, overwritten by the factorization, a pivot vectoripiv, and the error codeinfo which is a non-negative integer. Ifinfo is positive the matrix is singular and the diagonal part of the factorization is exactly zero at positioninfo.

sytrf!(uplo, A, ipiv) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a symmetric matrixA. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.

ReturnsA, overwritten by the factorization, the pivot vectoripiv, and the error codeinfo which is a non-negative integer. Ifinfo is positive the matrix is singular and the diagonal part of the factorization is exactly zero at positioninfo.

LinearAlgebra.LAPACK.sytri!Function
sytri!(uplo, A, ipiv)

Computes the inverse of a symmetric matrixA using the results ofsytrf!. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.A is overwritten by its inverse.

LinearAlgebra.LAPACK.sytrs!Function
sytrs!(uplo, A, ipiv, B)

Solves the equationA * X = B for a symmetric matrixA using the results ofsytrf!. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.B is overwritten by the solutionX.

LinearAlgebra.LAPACK.hesv!Function
hesv!(uplo, A, B) -> (B, A, ipiv)

Finds the solution toA * X = B for Hermitian matrixA. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.B is overwritten by the solutionX.A is overwritten by its Bunch-Kaufman factorization.ipiv contains pivoting information about the factorization.

LinearAlgebra.LAPACK.hetrf!Function
hetrf!(uplo, A) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a Hermitian matrixA. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.

ReturnsA, overwritten by the factorization, a pivot vectoripiv, and the error codeinfo which is a non-negative integer. Ifinfo is positive the matrix is singular and the diagonal part of the factorization is exactly zero at positioninfo.

hetrf!(uplo, A, ipiv) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a Hermitian matrixA. Ifuplo = U, the upper half ofA is stored. Ifuplo = L, the lower half is stored.

ReturnsA, overwritten by the factorization, the pivot vectoripiv, and the error codeinfo which is a non-negative integer. Ifinfo is positive the matrix is singular and the diagonal part of the factorization is exactly zero at positioninfo.

LinearAlgebra.LAPACK.hetri!Function
hetri!(uplo, A, ipiv)

Computes the inverse of a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.

LinearAlgebra.LAPACK.hetrs!Function
hetrs!(uplo, A, ipiv, B)

Solves the equation A * X = B for a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.

LinearAlgebra.LAPACK.syev!Function
syev!(jobz, uplo, A)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrixA. Ifuplo = U, the upper triangle ofA is used. Ifuplo = L, the lower triangle ofA is used.
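
For example, computing only the eigenvalues of an arbitrary symmetric matrix (a minimal sketch; with jobz = N the eigenvalues are returned as a vector, in ascending order):

using LinearAlgebra
A = [2.0 1.0; 1.0 2.0]
w = LAPACK.syev!('N', 'U', copy(A))   # eigenvalues only
w ≈ [1.0, 3.0]                        # true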

LinearAlgebra.LAPACK.syevr!Function
syevr!(jobz, range, uplo, A, vl, vu, il, iu, abstol) -> (W, Z)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrixA. Ifuplo = U, the upper triangle ofA is used. Ifuplo = L, the lower triangle ofA is used. Ifrange = A, all the eigenvalues are found. Ifrange = V, the eigenvalues in the half-open interval(vl, vu] are found. Ifrange = I, the eigenvalues with indices betweenil andiu are found.abstol can be set as a tolerance for convergence.

The eigenvalues are returned inW and the eigenvectors inZ.

LinearAlgebra.LAPACK.syevd!Function
syevd!(jobz, uplo, A)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrixA. Ifuplo = U, the upper triangle ofA is used. Ifuplo = L, the lower triangle ofA is used.

Uses the divide-and-conquer method, instead of the QR iteration used by syev! or the multiple relatively robust representations used by syevr!. See James W. Demmel et al., SIAM J. Sci. Comput. 30, 3, 1508 (2008) for a comparison of the accuracy and performance of different methods.

LinearAlgebra.LAPACK.sygvd!Function
sygvd!(itype, jobz, uplo, A, B) -> (w, A, B)

Finds the generalized eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrixA and symmetric positive-definite matrixB. Ifuplo = U, the upper triangles ofA andB are used. Ifuplo = L, the lower triangles ofA andB are used. Ifitype = 1, the problem to solve isA * x = lambda * B * x. Ifitype = 2, the problem to solve isA * B * x = lambda * x. Ifitype = 3, the problem to solve isB * A * x = lambda * x.

LinearAlgebra.LAPACK.bdsqr!Function
bdsqr!(uplo, d, e_, Vt, U, C) -> (d, Vt, U, C)

Computes the singular value decomposition of a bidiagonal matrix withd on the diagonal ande_ on the off-diagonal. Ifuplo = U,e_ is the superdiagonal. Ifuplo = L,e_ is the subdiagonal. Can optionally also compute the productQ' * C.

Returns the singular values ind, and the matrixC overwritten withQ' * C.

LinearAlgebra.LAPACK.bdsdc!Function
bdsdc!(uplo, compq, d, e_) -> (d, e, u, vt, q, iq)

Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal using a divide and conquer method. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. If compq = N, only the singular values are found. If compq = I, the singular values and vectors are found. If compq = P, the singular values and vectors are found in compact form. Only works for real types.

Returns the singular values ind, and ifcompq = P, the compact singular vectors iniq.

LinearAlgebra.LAPACK.gecon!Function
gecon!(normtype, A, anorm)

Finds the reciprocal condition number of matrixA. Ifnormtype = I, the condition number is found in the infinity norm. Ifnormtype = O or1, the condition number is found in the one norm.A must be the result ofgetrf! andanorm is the norm ofA in the relevant norm.

LinearAlgebra.LAPACK.gehrd!Function
gehrd!(ilo, ihi, A) -> (A, tau)

Converts a matrixA to Hessenberg form. IfA is balanced withgebal! thenilo andihi are the outputs ofgebal!. Otherwise they should beilo = 1 andihi = size(A,2).tau contains the elementary reflectors of the factorization.

LinearAlgebra.LAPACK.orghr!Function
orghr!(ilo, ihi, A, tau)

Explicitly findsQ, the orthogonal/unitary matrix fromgehrd!.ilo,ihi,A, andtau must correspond to the input/output togehrd!.

LinearAlgebra.LAPACK.gees!Function
gees!(jobvs, A) -> (A, vs, w)

Computes the eigenvalues (jobvs = N) or the eigenvalues and Schur vectors (jobvs = V) of matrixA.A is overwritten by its Schur form.

ReturnsA,vs containing the Schur vectors, andw, containing the eigenvalues.
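
For example, computing a real Schur decomposition (an arbitrary matrix with real eigenvalues):

using LinearAlgebra
A = [4.0 1.0; 2.0 3.0]
T, vs, w = LAPACK.gees!('V', copy(A))   # T is (quasi-)upper triangular, vs orthogonal
vs * T * vs' ≈ A                        # A = vs * T * vs'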

LinearAlgebra.LAPACK.gges!Function
gges!(jobvsl, jobvsr, A, B) -> (A, B, alpha, beta, vsl, vsr)

Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (jobvsl = V), or right Schur vectors (jobvsr = V) of A and B.

The generalized eigenvalues are returned inalpha andbeta. The left Schur vectors are returned invsl and the right Schur vectors are returned invsr.

LinearAlgebra.LAPACK.gges3!Function
gges3!(jobvsl, jobvsr, A, B) -> (A, B, alpha, beta, vsl, vsr)

Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (jobvsl = V), or right Schur vectors (jobvsr = V) of A and B using a blocked algorithm. This function requires LAPACK 3.6.0.

The generalized eigenvalues are returned inalpha andbeta. The left Schur vectors are returned invsl and the right Schur vectors are returned invsr.

LinearAlgebra.LAPACK.trexc!Function
trexc!(compq, ifst, ilst, T, Q) -> (T, Q)

trexc!(ifst, ilst, T, Q) -> (T, Q)

Reorder the Schur factorizationT of a matrix, such that the diagonal block ofT with row indexifst is moved to row indexilst. Ifcompq = V, the Schur vectorsQ are reordered. Ifcompq = N they are not modified. The 4-arg method calls the 5-arg method withcompq = V.

LinearAlgebra.LAPACK.trsen!Function
trsen!(job, compq, select, T, Q) -> (T, Q, w, s, sep)

trsen!(select, T, Q) -> (T, Q, w, s, sep)

Reorder the Schur factorization of a matrix and optionally finds reciprocal condition numbers. Ifjob = N, no condition numbers are found. Ifjob = E, only the condition number for this cluster of eigenvalues is found. Ifjob = V, only the condition number for the invariant subspace is found. Ifjob = B then the condition numbers for the cluster and subspace are found. Ifcompq = V the Schur vectorsQ are updated. Ifcompq = N the Schur vectors are not modified.select determines which eigenvalues are in the cluster. The 3-arg method calls the 5-arg method withjob = N andcompq = V.

ReturnsT,Q, reordered eigenvalues inw, the condition number of the cluster of eigenvaluess, and the condition number of the invariant subspacesep.

LinearAlgebra.LAPACK.tgsen!Function
tgsen!(select, S, T, Q, Z) -> (S, T, alpha, beta, Q, Z)

Reorders the vectors of a generalized Schur decomposition.select specifies the eigenvalues in each cluster.

LinearAlgebra.LAPACK.trsyl!Function
trsyl!(transa, transb, A, B, C, isgn=1) -> (C, scale)

Solves the Sylvester matrix equationA * X +/- X * B = scale*C whereA andB are both quasi-upper triangular. Iftransa = N,A is not modified. Iftransa = T,A is transposed. Iftransa = C,A is conjugate transposed. Similarly fortransb andB. Ifisgn = 1, the equationA * X + X * B = scale * C is solved. Ifisgn = -1, the equationA * X - X * B = scale * C is solved.

ReturnsX (overwritingC) andscale.
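
For example (arbitrary triangular A and B with disjoint spectra, so the solution exists and is unique):

using LinearAlgebra
A = [1.0 2.0; 0.0 3.0]                    # upper triangular
B = [4.0 1.0; 0.0 5.0]                    # upper triangular
C = [1.0 0.0; 2.0 1.0]
X, scale = LAPACK.trsyl!('N', 'N', A, B, copy(C))
A * X + X * B ≈ scale * C                 # true (isgn = 1 by default)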

LinearAlgebra.LAPACK.hseqr!Function
hseqr!(job, compz, ilo, ihi, H, Z) -> (H, Z, w)

Computes all eigenvalues and (optionally) the Schur factorization of a matrix reduced to Hessenberg form. If H is balanced with gebal! then ilo and ihi are the outputs of gebal!. Otherwise they should be ilo = 1 and ihi = size(H,2). Returns H, Z, and the eigenvalues in w.

