In addition to (and as part of) its support for multi-dimensional arrays, Julia provides native implementations of many common and useful linear algebra operations, which can be loaded with `using LinearAlgebra`. Basic operations, such as `tr`, `det`, and `inv`, are all supported:
```julia
julia> A = [1 2 3; 4 1 6; 7 8 1]
3×3 Matrix{Int64}:
 1  2  3
 4  1  6
 7  8  1

julia> tr(A)
3

julia> det(A)
104.0

julia> inv(A)
3×3 Matrix{Float64}:
 -0.451923   0.211538    0.0865385
  0.365385  -0.192308    0.0576923
  0.240385   0.0576923  -0.0673077
```
As well as other useful operations, such as finding eigenvalues or eigenvectors:
```julia
julia> A = [-4. -17.; 2. 2.]
2×2 Matrix{Float64}:
 -4.0  -17.0
  2.0    2.0

julia> eigvals(A)
2-element Vector{ComplexF64}:
 -1.0 - 5.0im
 -1.0 + 5.0im

julia> eigvecs(A)
2×2 Matrix{ComplexF64}:
  0.945905-0.0im        0.945905+0.0im
 -0.166924+0.278207im  -0.166924-0.278207im
```
In addition, Julia provides many factorizations which can be used to speed up problems such as linear solve or matrix exponentiation by pre-factorizing a matrix into a form more amenable (for performance or memory reasons) to the problem. See the documentation on `factorize` for more information. As an example:
```julia
julia> A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
3×3 Matrix{Float64}:
   1.5   2.0  -4.0
   3.0  -1.0  -6.0
 -10.0   2.3   4.0

julia> factorize(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
3×3 Matrix{Float64}:
  1.0    0.0       0.0
 -0.15   1.0       0.0
 -0.3   -0.132196  1.0
U factor:
3×3 Matrix{Float64}:
 -10.0  2.3     4.0
   0.0  2.345  -3.4
   0.0  0.0    -5.24947
```
Since `A` is not Hermitian, symmetric, triangular, tridiagonal, or bidiagonal, an LU factorization may be the best we can do. Compare with:
```julia
julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Matrix{Float64}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> factorize(B)
BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}
D factor:
3×3 Tridiagonal{Float64, Vector{Float64}}:
 -1.64286   0.0   ⋅
  0.0      -2.8  0.0
   ⋅        0.0  5.0
U factor:
3×3 UnitUpperTriangular{Float64, Matrix{Float64}}:
 1.0  0.142857  -0.8
  ⋅   1.0       -0.6
  ⋅    ⋅         1.0
permutation:
3-element Vector{Int64}:
 1
 2
 3
```
Here, Julia was able to detect that `B` is in fact symmetric, and used a more appropriate factorization. Often it's possible to write more efficient code for a matrix that is known to have certain properties, e.g. it is symmetric or tridiagonal. Julia provides some special types so that you can "tag" matrices as having these properties. For instance:
```julia
julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Matrix{Float64}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> sB = Symmetric(B)
3×3 Symmetric{Float64, Matrix{Float64}}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0
```
`sB` has been tagged as a matrix that's (real) symmetric, so for later operations we might perform on it, such as eigenfactorization or computing matrix-vector products, efficiencies can be found by only referencing half of it. For example:
```julia
julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Matrix{Float64}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> sB = Symmetric(B)
3×3 Symmetric{Float64, Matrix{Float64}}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> x = [1; 2; 3]
3-element Vector{Int64}:
 1
 2
 3

julia> sB\x
3-element Vector{Float64}:
 -1.7391304347826084
 -1.1086956521739126
 -1.4565217391304346
```
The `\` operation here performs the linear solution. The left-division operator is pretty powerful and it's easy to write compact, readable code that is flexible enough to solve all sorts of systems of linear equations.
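To make the reuse point concrete, here is a small sketch (matrix and right-hand sides invented for illustration) that factorizes once and then solves against several right-hand sides:

```julia
using LinearAlgebra

A = [1.5 2.0 -4.0; 3.0 -1.0 -6.0; -10.0 2.3 4.0]
F = factorize(A)            # LU factorization, computed once

b1 = [1.0, 2.0, 3.0]
b2 = [4.0, 5.0, 6.0]
x1 = F \ b1                 # each solve reuses the stored factors
x2 = F \ b2
@assert A * x1 ≈ b1 && A * x2 ≈ b2
```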
Matrices with special symmetries and structures arise often in linear algebra and are frequently associated with various matrix factorizations. Julia features a rich collection of special matrix types, which allow for fast computation with specialized routines developed for particular matrix types.
The following tables summarize the types of special matrices that have been implemented in Julia, as well as whether hooks to various optimized methods for them in LAPACK are available.
Type | Description |
---|---|
Symmetric | Symmetric matrix |
Hermitian | Hermitian matrix |
UpperTriangular | Upper triangular matrix |
UnitUpperTriangular | Upper triangular matrix with unit diagonal |
LowerTriangular | Lower triangular matrix |
UnitLowerTriangular | Lower triangular matrix with unit diagonal |
UpperHessenberg | Upper Hessenberg matrix |
Tridiagonal | Tridiagonal matrix |
SymTridiagonal | Symmetric tridiagonal matrix |
Bidiagonal | Upper/lower bidiagonal matrix |
Diagonal | Diagonal matrix |
UniformScaling | Uniform scaling operator |
Matrix type | + | - | * | \ | Other functions with optimized methods |
---|---|---|---|---|---|
Symmetric | | | MV | | `inv`, `sqrt`, `cbrt`, `exp` |
Hermitian | | | MV | | `inv`, `sqrt`, `cbrt`, `exp` |
UpperTriangular | | | MV | MV | `inv`, `det`, `logdet` |
UnitUpperTriangular | | | MV | MV | `inv`, `det`, `logdet` |
LowerTriangular | | | MV | MV | `inv`, `det`, `logdet` |
UnitLowerTriangular | | | MV | MV | `inv`, `det`, `logdet` |
UpperHessenberg | | | | MM | `inv`, `det` |
SymTridiagonal | M | M | MS | MV | `eigmax`, `eigmin` |
Tridiagonal | M | M | MS | MV | |
Bidiagonal | M | M | MS | MV | |
Diagonal | M | M | MV | MV | `inv`, `det`, `logdet`, `/` |
UniformScaling | M | M | MVS | MVS | `/` |
Legend:
Key | Description |
---|---|
M (matrix) | An optimized method for matrix-matrix operations is available |
V (vector) | An optimized method for matrix-vector operations is available |
S (scalar) | An optimized method for matrix-scalar operations is available |
Matrix type | LAPACK | eigen | eigvals | eigvecs | svd | svdvals |
---|---|---|---|---|---|---|
Symmetric | SY | ARI | ||||
Hermitian | HE | ARI | ||||
UpperTriangular | TR | A | A | A | ||
UnitUpperTriangular | TR | A | A | A | ||
LowerTriangular | TR | A | A | A | ||
UnitLowerTriangular | TR | A | A | A | ||
SymTridiagonal | ST | A | ARI | AV | ||
Tridiagonal | GT | |||||
Bidiagonal | BD | A | A | |||
Diagonal | DI | A |
Legend:
Key | Description | Example |
---|---|---|
A (all) | An optimized method to find all the characteristic values and/or vectors is available | e.g. `eigvals(M)` |
R (range) | An optimized method to find the `il`th through the `ih`th characteristic values is available | `eigvals(M, il, ih)` |
I (interval) | An optimized method to find the characteristic values in the interval [`vl`, `vh`] is available | `eigvals(M, vl, vh)` |
V (vectors) | An optimized method to find the characteristic vectors corresponding to the characteristic values `x=[x1, x2,...]` is available | `eigvecs(M, x)` |
A `UniformScaling` operator represents a scalar times the identity operator, `λ*I`. The identity operator `I` is defined as a constant and is an instance of `UniformScaling`. The size of these operators is generic and matches the other matrix in the binary operations `+`, `-`, `*` and `\`. For `A+I` and `A-I` this means that `A` must be square. Multiplication with the identity operator `I` is a noop (except for checking that the scaling factor is one) and is therefore almost without overhead.
To see the `UniformScaling` operator in action:
```julia
julia> U = UniformScaling(2);

julia> a = [1 2; 3 4]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> a + U
2×2 Matrix{Int64}:
 3  2
 3  6

julia> a * U
2×2 Matrix{Int64}:
 2  4
 6  8

julia> [a U]
2×4 Matrix{Int64}:
 1  2  2  0
 3  4  0  2

julia> b = [1 2 3; 4 5 6]
2×3 Matrix{Int64}:
 1  2  3
 4  5  6

julia> b - U
ERROR: DimensionMismatch: matrix is not square: dimensions are (2, 3)
Stacktrace:
[...]
```
If you need to solve many systems of the form `(A+μI)x = b` for the same `A` and different `μ`, it might be beneficial to first compute the Hessenberg factorization `F` of `A` via the `hessenberg` function. Given `F`, Julia employs an efficient algorithm for `(F+μ*I) \ b` (equivalent to `(A+μ*I) \ b`) and related operations like determinants.
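A brief sketch of that pattern (random data and arbitrary shifts, for illustration):

```julia
using LinearAlgebra

A = rand(4, 4); b = rand(4)
F = hessenberg(A)              # factor A once
for μ in (0.5, 1.0, 2.0)
    x = (F + μ*I) \ b          # efficient shifted solve that reuses F
    @assert (A + μ*I) * x ≈ b
end
```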
Matrix factorizations (a.k.a. matrix decompositions) compute the factorization of a matrix into a product of matrices, and are one of the central concepts in (numerical) linear algebra.
The following table summarizes the types of matrix factorizations that have been implemented in Julia. Details of their associated methods can be found in the Standard functions section of the Linear Algebra documentation.
Type | Description |
---|---|
BunchKaufman | Bunch-Kaufman factorization |
Cholesky | Cholesky factorization |
CholeskyPivoted | Pivoted Cholesky factorization |
LDLt | LDL(T) factorization |
LU | LU factorization |
QR | QR factorization |
QRCompactWY | Compact WY form of the QR factorization |
QRPivoted | Pivoted QR factorization |
LQ | QR factorization of `transpose(A)` |
Hessenberg | Hessenberg decomposition |
Eigen | Spectral decomposition |
GeneralizedEigen | Generalized spectral decomposition |
SVD | Singular value decomposition |
GeneralizedSVD | Generalized SVD |
Schur | Schur decomposition |
GeneralizedSchur | Generalized Schur decomposition |
Adjoints and transposes of `Factorization` objects are lazily wrapped in `AdjointFactorization` and `TransposeFactorization` objects, respectively. Generically, transposes of real `Factorization`s are wrapped as `AdjointFactorization`.
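For example (a small sketch with invented data), the lazily wrapped adjoint of an LU factorization can be used directly to solve `A'x = b`:

```julia
using LinearAlgebra

A = [4.0 3.0; 6.0 3.0]; b = [1.0, 2.0]
F = lu(A)
x = F' \ b            # F' is an AdjointFactorization; solves A'x = b
@assert A' * x ≈ b
```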
Orthogonal matrices (`AbstractQ`)

Some matrix factorizations generate orthogonal/unitary "matrix" factors. These factorizations include QR-related factorizations obtained from calls to `qr`, i.e., `QR`, `QRCompactWY` and `QRPivoted`, the Hessenberg factorization obtained from calls to `hessenberg`, and the LQ factorization obtained from `lq`. While these orthogonal/unitary factors admit a matrix representation, their internal representation is, for performance and memory reasons, different. Hence, they should rather be viewed as matrix-backed, function-based linear operators. In particular, reading, for instance, a column of their matrix representation requires running "matrix"-vector multiplication code, rather than simply reading out data from memory (possibly filling parts of the vector with structural zeros). Another clear distinction from other, non-triangular matrix types is that the underlying multiplication code allows for in-place modification during multiplication. Furthermore, objects of specific `AbstractQ` subtypes as those created via `qr`, `hessenberg` and `lq` can behave like a square or a rectangular matrix depending on context:
```julia
julia> using LinearAlgebra

julia> Q = qr(rand(3,2)).Q
3×3 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}

julia> Matrix(Q)
3×2 Matrix{Float64}:
 -0.320597   0.865734
 -0.765834  -0.475694
 -0.557419   0.155628

julia> Q*I
3×3 Matrix{Float64}:
 -0.320597   0.865734  -0.384346
 -0.765834  -0.475694  -0.432683
 -0.557419   0.155628   0.815514

julia> Q*ones(2)
3-element Vector{Float64}:
  0.5451367118802273
 -1.241527373086654
 -0.40179067589600226

julia> Q*ones(3)
3-element Vector{Float64}:
  0.16079054743832022
 -1.674209978965636
  0.41372375588835797

julia> ones(1,2) * Q'
1×3 Matrix{Float64}:
 0.545137  -1.24153  -0.401791

julia> ones(1,3) * Q'
1×3 Matrix{Float64}:
 0.160791  -1.67421  0.413724
```
Due to this distinction from dense or structured matrices, the abstract `AbstractQ` type does not subtype `AbstractMatrix`, but instead has its own type hierarchy. Custom types that subtype `AbstractQ` can rely on generic fallbacks if the following interface is satisfied. For example, for

```julia
struct MyQ{T} <: LinearAlgebra.AbstractQ{T}
    # required fields
end
```

provide overloads for

```julia
Base.size(Q::MyQ) # size of corresponding square matrix representation
Base.convert(::Type{AbstractQ{T}}, Q::MyQ) # eltype promotion [optional]
LinearAlgebra.lmul!(Q::MyQ, x::AbstractVecOrMat) # left-multiplication
LinearAlgebra.rmul!(A::AbstractMatrix, Q::MyQ) # right-multiplication
```
If `eltype` promotion is not of interest, the `convert` method is unnecessary, since by default `convert(::Type{AbstractQ{T}}, Q::AbstractQ{T})` returns `Q` itself. Adjoints of `AbstractQ`-typed objects are lazily wrapped in an `AdjointQ` wrapper type, which requires its own `LinearAlgebra.lmul!` and `LinearAlgebra.rmul!` methods. Given this set of methods, any `Q::MyQ` can be used like a matrix, preferably in a multiplicative context: multiplication via `*` with scalars, vectors and matrices from left and right, obtaining a matrix representation of `Q` via `Matrix(Q)` (or `Q*I`) and indexing into the matrix representation all work. In contrast, addition and subtraction as well as more generally broadcasting over elements in the matrix representation fail because that would be highly inefficient. For such use cases, consider computing the matrix representation up front and caching it for future reuse.
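As a minimal sketch of this interface, here is a hypothetical `MyQ` type (not part of the library) that simply wraps an explicit square orthogonal matrix; a real implementation would use a compressed representation, and the `AdjointQ` methods are omitted for brevity:

```julia
using LinearAlgebra

# Hypothetical AbstractQ subtype backed by an explicit orthogonal matrix.
struct MyQ{T,M<:AbstractMatrix{T}} <: LinearAlgebra.AbstractQ{T}
    data::M   # assumed square and orthogonal/unitary
end

# size of the corresponding square matrix representation
Base.size(Q::MyQ) = size(Q.data)

# eltype promotion [optional]
Base.convert(::Type{LinearAlgebra.AbstractQ{T}}, Q::MyQ) where {T} =
    MyQ(convert(AbstractMatrix{T}, Q.data))

# left- and right-multiplication, overwriting the non-Q argument
LinearAlgebra.lmul!(Q::MyQ, x::AbstractVecOrMat) = copyto!(x, Q.data * x)
LinearAlgebra.rmul!(A::AbstractMatrix, Q::MyQ) = copyto!(A, A * Q.data)

Q = MyQ(Matrix(qr(rand(3, 3)).Q))
Q * ones(3)    # multiplicative use via the generic fallbacks
Matrix(Q)      # dense matrix representation
```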
Several of Julia's matrix factorizations support pivoting, which can be used to improve their numerical stability. In fact, some matrix factorizations, such as the LU factorization, may fail without pivoting.

In pivoting, first, a pivot element with good numerical properties is chosen based on a pivoting strategy. Next, the rows and columns of the original matrix are permuted to bring the chosen element in place for subsequent computation. Furthermore, the process is repeated for each stage of the factorization.

Consequently, besides the conventional matrix factors, the outputs of pivoted factorization schemes also include permutation matrices.

In the following, the pivoting strategies implemented in Julia are briefly described. Note that not all matrix factorizations may support them. Consult the documentation of the respective matrix factorization for details on the supported pivoting strategies.

See also `LinearAlgebra.ZeroPivotException`.
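As a small illustration of why pivoting matters (matrix invented: it has a zero in the leading pivot position), the default `RowMaximum()` strategy permutes rows where `NoPivot()` would fail:

```julia
using LinearAlgebra

A = [0.0 1.0; 1.0 0.0]       # zero in the leading pivot position
F = lu(A)                    # default RowMaximum() swaps rows and succeeds
@assert F.L * F.U == A[F.p, :]

lu(A, RowNonZero())          # also succeeds: first nonzero row is the pivot
# lu(A, NoPivot())           # would throw a ZeroPivotException
```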
LinearAlgebra.NoPivot — Type

`NoPivot`
Pivoting is not performed. Matrix factorizations such as the LU factorization may fail without pivoting, and may also be numerically unstable for floating-point matrices in the face of roundoff error. This pivot strategy is mainly useful for pedagogical purposes.
LinearAlgebra.RowNonZero — Type

`RowNonZero`
First non-zero element in the remaining rows is chosen as the pivot element.
Beware that for floating-point matrices, the resulting LU algorithm is numerically unstable — this strategy is mainly useful for comparison to hand calculations (which typically use this strategy) or for other algebraic types (e.g. rational numbers) not susceptible to roundoff errors. Otherwise, the default `RowMaximum` pivoting strategy should generally be preferred in Gaussian elimination.
Note that the element type of the matrix must admit an `iszero` method.
LinearAlgebra.RowMaximum — Type

`RowMaximum`
The maximum-magnitude element in the remaining rows is chosen as the pivot element. This is the default strategy for LU factorization of floating-point matrices, and is sometimes referred to as the "partial pivoting" algorithm.
Note that the element type of the matrix must admit an `abs` method, whose result type must admit a `<` method.
LinearAlgebra.ColumnNorm — Type

`ColumnNorm`
The column with the maximum norm is used for subsequent computation. This is used for pivoted QR factorization.
Note that the element type of the matrix must admit `norm` and `abs` methods, whose respective result types must admit a `<` method.
Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. Sparse matrix factorizations call functions from SuiteSparse. Other sparse solvers are available as Julia packages.
Base.:* — Method

`*(A::AbstractMatrix, B::AbstractMatrix)`
Matrix multiplication.
Examples
```julia
julia> [1 1; 0 1] * [1 0; 1 1]
2×2 Matrix{Int64}:
 2  1
 1  1
```
Base.:* — Method

`*(A, B::AbstractMatrix, C)`
`A * B * C * D`

Chained multiplication of 3 or 4 matrices is done in the most efficient sequence, based on the sizes of the arrays. That is, the number of scalar multiplications needed for `(A * B) * C` (with 3 dense matrices) is compared to that for `A * (B * C)` to choose which of these to execute.

If the last factor is a vector, or the first a transposed vector, then it is efficient to deal with these first. In particular `x' * B * y` means `(x' * B) * y` for an ordinary column-major `B::Matrix`. Unlike `dot(x, B, y)`, this allocates an intermediate array.

If the first or last factor is a number, this will be fused with the matrix multiplication, using 5-arg `mul!`.
These optimisations require at least Julia 1.7.
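For instance (sizes invented for illustration), with a wide-then-narrow chain the right-to-left association is far cheaper, and the 3-argument method selects it automatically:

```julia
A = rand(10, 100); B = rand(100, 10); C = rand(10, 1)
# (A*B)*C costs 10*100*10 + 10*10*1 = 10_100 scalar multiplications,
# A*(B*C) costs 100*10*1 + 10*100*1 =  2_000; the 3-arg * picks the cheaper.
A * B * C
```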
Base.:\ — Method

`\(A, B)`

Matrix division using a polyalgorithm. For input matrices `A` and `B`, the result `X` is such that `A*X == B` when `A` is square. The solver that is used depends upon the structure of `A`. If `A` is upper or lower triangular (or diagonal), no factorization of `A` is required and the system is solved with either forward or backward substitution. For non-triangular square matrices, an LU factorization is used.

For rectangular `A` the result is the minimum-norm least squares solution computed by a pivoted QR factorization of `A` and a rank estimate of `A` based on the R factor.

When `A` is sparse, a similar polyalgorithm is used. For indefinite matrices, the `LDLt` factorization does not use pivoting during the numerical factorization and therefore the procedure can fail even for invertible matrices.
Examples
```julia
julia> A = [1 0; 1 -2]; B = [32; -4];

julia> X = A \ B
2-element Vector{Float64}:
 32.0
 18.0

julia> A * X == B
true
```
Base.:/ — Method

`A / B`

Matrix right-division: `A / B` is equivalent to `(B' \ A')'` where `\` is the left-division operator. For square matrices, the result `X` is such that `A == X*B`.

See also: `rdiv!`.
Examples
```julia
julia> A = Float64[1 4 5; 3 9 2]; B = Float64[1 4 2; 3 4 2; 8 7 1];

julia> X = A / B
2×3 Matrix{Float64}:
 -0.65  3.75  -1.2
  3.25 -2.75   1.0

julia> isapprox(A, X*B)
true

julia> isapprox(X, A*pinv(B))
true
```
LinearAlgebra.SingularException — Type

`SingularException`

Exception thrown when the input matrix has one or more zero-valued eigenvalues, and is not invertible. A linear solve involving such a matrix cannot be computed. The `info` field indicates the location of (one of) the singular value(s).
LinearAlgebra.PosDefException — Type

`PosDefException`

Exception thrown when the input matrix was not positive definite. Some linear algebra functions and factorizations are only applicable to positive definite matrices. The `info` field indicates the location of (one of) the eigenvalue(s) which is (are) less than/equal to 0.
LinearAlgebra.ZeroPivotException — Type

`ZeroPivotException <: Exception`

Exception thrown when a matrix factorization/solve encounters a zero in a pivot (diagonal) position and cannot proceed. This may not mean that the matrix is singular: it may be fruitful to switch to a different factorization such as pivoted LU that can re-order variables to eliminate spurious zero pivots. The `info` field indicates the location of (one of) the zero pivot(s).
LinearAlgebra.RankDeficientException — Type

`RankDeficientException`

Exception thrown when the input matrix is rank deficient. Some linear algebra functions, such as the Cholesky decomposition, are only applicable to matrices that are not rank deficient. The `info` field indicates the computed rank of the matrix.
LinearAlgebra.LAPACKException — Type

`LAPACKException`

Generic LAPACK exception thrown either during direct calls to the LAPACK functions or during calls to other functions that use the LAPACK functions internally but lack specialized error handling. The `info` field contains additional information on the underlying error and depends on the LAPACK function that was invoked.
LinearAlgebra.dot — Function

`dot(x, y)`
`x ⋅ y`

Compute the dot product between two vectors. For complex vectors, the first vector is conjugated.

`dot` also works on arbitrary iterable objects, including arrays of any dimension, as long as `dot` is defined on the elements.

`dot` is semantically equivalent to `sum(dot(vx,vy) for (vx,vy) in zip(x, y))`, with the added restriction that the arguments must have equal lengths.

`x ⋅ y` (where `⋅` can be typed by tab-completing `\cdot` in the REPL) is a synonym for `dot(x, y)`.
Examples
```julia
julia> dot([1; 1], [2; 3])
5

julia> dot([im; im], [1; 1])
0 - 2im

julia> dot(1:5, 2:6)
70

julia> x = fill(2., (5,5));

julia> y = fill(3., (5,5));

julia> dot(x, y)
150.0
```
LinearAlgebra.dot — Method

`dot(x, A, y)`

Compute the generalized dot product `dot(x, A*y)` between two vectors `x` and `y`, without storing the intermediate result of `A*y`. As for the two-argument `dot(_,_)`, this acts recursively. Moreover, for complex vectors, the first vector is conjugated.

Three-argument `dot` requires at least Julia 1.4.
Examples
```julia
julia> dot([1; 1], [1 2; 3 4], [2; 3])
26

julia> dot(1:5, reshape(1:25, 5, 5), 2:6)
4850

julia> ⋅(1:5, reshape(1:25, 5, 5), 2:6) == dot(1:5, reshape(1:25, 5, 5), 2:6)
true
```
LinearAlgebra.cross — Function

`cross(x, y)`
`×(x,y)`

Compute the cross product of two 3-vectors.
Examples
```julia
julia> a = [0;1;0]
3-element Vector{Int64}:
 0
 1
 0

julia> b = [0;0;1]
3-element Vector{Int64}:
 0
 0
 1

julia> cross(a,b)
3-element Vector{Int64}:
 1
 0
 0
```
LinearAlgebra.axpy! — Function

`axpy!(α, x::AbstractArray, y::AbstractArray)`

Overwrite `y` with `x * α + y` and return `y`. If `x` and `y` have the same axes, it's equivalent to `y .+= x .* α`.
Examples
```julia
julia> x = [1; 2; 3];

julia> y = [4; 5; 6];

julia> axpy!(2, x, y)
3-element Vector{Int64}:
  6
  9
 12
```
LinearAlgebra.axpby! — Function

`axpby!(α, x::AbstractArray, β, y::AbstractArray)`

Overwrite `y` with `x * α + y * β` and return `y`. If `x` and `y` have the same axes, it's equivalent to `y .= x .* α .+ y .* β`.
Examples
```julia
julia> x = [1; 2; 3];

julia> y = [4; 5; 6];

julia> axpby!(2, x, 2, y)
3-element Vector{Int64}:
 10
 14
 18
```
LinearAlgebra.rotate! — Function

`rotate!(x, y, c, s)`

Overwrite `x` with `c*x + s*y` and `y` with `-conj(s)*x + c*y`. Returns `x` and `y`.

`rotate!` requires at least Julia 1.5.
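For illustration (values invented), a Givens-style rotation with `c^2 + s^2 == 1`:

```julia
using LinearAlgebra

x = [1.0, 0.0]; y = [0.0, 1.0]
c, s = cos(π/4), sin(π/4)
rotate!(x, y, c, s)     # x ← c*x + s*y, y ← -conj(s)*x + c*y
```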
LinearAlgebra.reflect! — Function

`reflect!(x, y, c, s)`

Overwrite `x` with `c*x + s*y` and `y` with `conj(s)*x - c*y`. Returns `x` and `y`.

`reflect!` requires at least Julia 1.5.
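Similarly for `reflect!`; since a reflection is its own inverse, applying it twice restores the inputs (a small sketch with invented values):

```julia
using LinearAlgebra

x = [1.0, 2.0]; y = [3.0, 4.0]
c, s = cos(π/3), sin(π/3)                  # c^2 + s^2 == 1
reflect!(x, y, c, s)
reflect!(x, y, c, s)                       # applying the reflection twice...
@assert x ≈ [1.0, 2.0] && y ≈ [3.0, 4.0]   # ...restores the originals
```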
LinearAlgebra.factorize — Function

`factorize(A)`

Compute a convenient factorization of `A`, based upon the type of the input matrix. `factorize` checks `A` to see if it is symmetric/triangular/etc. if `A` is passed as a generic matrix. `factorize` checks every element of `A` to verify/rule out each property. It will short-circuit as soon as it can rule out symmetry/triangular structure. The return value can be reused for efficient solving of multiple systems. For example: `A=factorize(A); x=A\b; y=A\C`.
Properties ofA | type of factorization |
---|---|
Positive-definite | Cholesky (see `cholesky`) |
Dense Symmetric/Hermitian | Bunch-Kaufman (see `bunchkaufman`) |
Sparse Symmetric/Hermitian | LDLt (see `ldlt`) |
Triangular | Triangular |
Diagonal | Diagonal |
Bidiagonal | Bidiagonal |
Tridiagonal | LU (see `lu`) |
Symmetric real tridiagonal | LDLt (see `ldlt`) |
General square | LU (see `lu`) |
General non-square | QR (see `qr`) |
If `factorize` is called on a Hermitian positive-definite matrix, for instance, then `factorize` will return a Cholesky factorization.
Examples
```julia
julia> A = Array(Bidiagonal(fill(1.0, (5, 5)), :U))
5×5 Matrix{Float64}:
 1.0  1.0  0.0  0.0  0.0
 0.0  1.0  1.0  0.0  0.0
 0.0  0.0  1.0  1.0  0.0
 0.0  0.0  0.0  1.0  1.0
 0.0  0.0  0.0  0.0  1.0

julia> factorize(A) # factorize will check to see that A is already factorized
5×5 Bidiagonal{Float64, Vector{Float64}}:
 1.0  1.0   ⋅    ⋅    ⋅
  ⋅   1.0  1.0   ⋅    ⋅
  ⋅    ⋅   1.0  1.0   ⋅
  ⋅    ⋅    ⋅   1.0  1.0
  ⋅    ⋅    ⋅    ⋅   1.0
```
This returns a `5×5 Bidiagonal{Float64}`, which can now be passed to other linear algebra functions (e.g. eigensolvers) which will use specialized methods for `Bidiagonal` types.
LinearAlgebra.Diagonal — Type

`Diagonal(V::AbstractVector)`

Construct a lazy matrix with `V` as its diagonal.

See also `UniformScaling` for the lazy identity matrix `I`, `diagm` to make a dense matrix, and `diag` to extract diagonal elements.
Examples
```julia
julia> d = Diagonal([1, 10, 100])
3×3 Diagonal{Int64, Vector{Int64}}:
 1   ⋅    ⋅
 ⋅  10    ⋅
 ⋅   ⋅  100

julia> diagm([7, 13])
2×2 Matrix{Int64}:
 7   0
 0  13

julia> ans + I
2×2 Matrix{Int64}:
 8   0
 0  14

julia> I(2)
2×2 Diagonal{Bool, Vector{Bool}}:
 1  ⋅
 ⋅  1
```
A one-column matrix is not treated like a vector, but instead calls the method `Diagonal(A::AbstractMatrix)`, which extracts the 1-element `diag(A)`:
```julia
julia> A = transpose([7.0 13.0])
2×1 transpose(::Matrix{Float64}) with eltype Float64:
  7.0
 13.0

julia> Diagonal(A)
1×1 Diagonal{Float64, Vector{Float64}}:
 7.0
```
`Diagonal(A::AbstractMatrix)`

Construct a matrix from the principal diagonal of `A`. The input matrix `A` may be rectangular, but the output will be square.
Examples
```julia
julia> A = [1 2; 3 4]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> D = Diagonal(A)
2×2 Diagonal{Int64, Vector{Int64}}:
 1  ⋅
 ⋅  4

julia> A = [1 2 3; 4 5 6]
2×3 Matrix{Int64}:
 1  2  3
 4  5  6

julia> Diagonal(A)
2×2 Diagonal{Int64, Vector{Int64}}:
 1  ⋅
 ⋅  5
```
`Diagonal{T}(undef, n)`

Construct an uninitialized `Diagonal{T}` of length `n`. See `undef`.
LinearAlgebra.Bidiagonal — Type

`Bidiagonal(dv::V, ev::V, uplo::Symbol) where V <: AbstractVector`

Constructs an upper (`uplo=:U`) or lower (`uplo=:L`) bidiagonal matrix using the given diagonal (`dv`) and off-diagonal (`ev`) vectors. The result is of type `Bidiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `convert(Array, _)` (or `Array(_)` for short). The length of `ev` must be one less than the length of `dv`.
Examples
```julia
julia> dv = [1, 2, 3, 4]
4-element Vector{Int64}:
 1
 2
 3
 4

julia> ev = [7, 8, 9]
3-element Vector{Int64}:
 7
 8
 9

julia> Bu = Bidiagonal(dv, ev, :U) # ev is on the first superdiagonal
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  7  ⋅  ⋅
 ⋅  2  8  ⋅
 ⋅  ⋅  3  9
 ⋅  ⋅  ⋅  4

julia> Bl = Bidiagonal(dv, ev, :L) # ev is on the first subdiagonal
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  ⋅  ⋅  ⋅
 7  2  ⋅  ⋅
 ⋅  8  3  ⋅
 ⋅  ⋅  9  4
```
`Bidiagonal(A, uplo::Symbol)`

Construct a `Bidiagonal` matrix from the main diagonal of `A` and its first super- (if `uplo=:U`) or sub-diagonal (if `uplo=:L`).
Examples
```julia
julia> A = [1 1 1 1; 2 2 2 2; 3 3 3 3; 4 4 4 4]
4×4 Matrix{Int64}:
 1  1  1  1
 2  2  2  2
 3  3  3  3
 4  4  4  4

julia> Bidiagonal(A, :U) # contains the main diagonal and first superdiagonal of A
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  1  ⋅  ⋅
 ⋅  2  2  ⋅
 ⋅  ⋅  3  3
 ⋅  ⋅  ⋅  4

julia> Bidiagonal(A, :L) # contains the main diagonal and first subdiagonal of A
4×4 Bidiagonal{Int64, Vector{Int64}}:
 1  ⋅  ⋅  ⋅
 2  2  ⋅  ⋅
 ⋅  3  3  ⋅
 ⋅  ⋅  4  4
```
LinearAlgebra.SymTridiagonal — Type

`SymTridiagonal(dv::V, ev::V) where V <: AbstractVector`

Construct a symmetric tridiagonal matrix from the diagonal (`dv`) and first sub/super-diagonal (`ev`), respectively. The result is of type `SymTridiagonal` and provides efficient specialized eigensolvers, but may be converted into a regular matrix with `convert(Array, _)` (or `Array(_)` for short).

For `SymTridiagonal` block matrices, the elements of `dv` are symmetrized. The argument `ev` is interpreted as the superdiagonal. Blocks from the subdiagonal are (materialized) transposes of the corresponding superdiagonal blocks.
Examples
```julia
julia> dv = [1, 2, 3, 4]
4-element Vector{Int64}:
 1
 2
 3
 4

julia> ev = [7, 8, 9]
3-element Vector{Int64}:
 7
 8
 9

julia> SymTridiagonal(dv, ev)
4×4 SymTridiagonal{Int64, Vector{Int64}}:
 1  7  ⋅  ⋅
 7  2  8  ⋅
 ⋅  8  3  9
 ⋅  ⋅  9  4

julia> A = SymTridiagonal(fill([1 2; 3 4], 3), fill([1 2; 3 4], 2));

julia> A[1,1]
2×2 Symmetric{Int64, Matrix{Int64}}:
 1  2
 2  4

julia> A[1,2]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> A[2,1]
2×2 Matrix{Int64}:
 1  3
 2  4
```
`SymTridiagonal(A::AbstractMatrix)`

Construct a symmetric tridiagonal matrix from the diagonal and first superdiagonal of the symmetric matrix `A`.
Examples
```julia
julia> A = [1 2 3; 2 4 5; 3 5 6]
3×3 Matrix{Int64}:
 1  2  3
 2  4  5
 3  5  6

julia> SymTridiagonal(A)
3×3 SymTridiagonal{Int64, Vector{Int64}}:
 1  2  ⋅
 2  4  5
 ⋅  5  6

julia> B = reshape([[1 2; 2 3], [1 2; 3 4], [1 3; 2 4], [1 2; 2 3]], 2, 2);

julia> SymTridiagonal(B)
2×2 SymTridiagonal{Matrix{Int64}, Vector{Matrix{Int64}}}:
 [1 2; 2 3]  [1 3; 2 4]
 [1 2; 3 4]  [1 2; 2 3]
```
LinearAlgebra.Tridiagonal — Type

`Tridiagonal(dl::V, d::V, du::V) where V <: AbstractVector`

Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type `Tridiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `convert(Array, _)` (or `Array(_)` for short). The lengths of `dl` and `du` must be one less than the length of `d`.

The subdiagonal `dl` and the superdiagonal `du` must not be aliased to each other. If aliasing is detected, the constructor will use a copy of `du` as its argument.
Examples
```julia
julia> dl = [1, 2, 3];

julia> du = [4, 5, 6];

julia> d = [7, 8, 9, 0];

julia> Tridiagonal(dl, d, du)
4×4 Tridiagonal{Int64, Vector{Int64}}:
 7  4  ⋅  ⋅
 1  8  5  ⋅
 ⋅  2  9  6
 ⋅  ⋅  3  0
```
`Tridiagonal(A)`

Construct a tridiagonal matrix from the first sub-diagonal, diagonal and first super-diagonal of the matrix `A`.
Examples
```julia
julia> A = [1 2 3 4; 1 2 3 4; 1 2 3 4; 1 2 3 4]
4×4 Matrix{Int64}:
 1  2  3  4
 1  2  3  4
 1  2  3  4
 1  2  3  4

julia> Tridiagonal(A)
4×4 Tridiagonal{Int64, Vector{Int64}}:
 1  2  ⋅  ⋅
 1  2  3  ⋅
 ⋅  2  3  4
 ⋅  ⋅  3  4
```
LinearAlgebra.Symmetric — Type

`Symmetric(A::AbstractMatrix, uplo::Symbol=:U)`

Construct a `Symmetric` view of the upper (if `uplo = :U`) or lower (if `uplo = :L`) triangle of the matrix `A`.

`Symmetric` views are mainly useful for real-symmetric matrices, for which specialized algorithms (e.g. for eigenproblems) are enabled for `Symmetric` types. More generally, see also `Hermitian(A)` for Hermitian matrices `A == A'`, which is effectively equivalent to `Symmetric` for real matrices but is also useful for complex matrices. (Whereas complex `Symmetric` matrices are supported but have few if any specialized algorithms.)

To compute the symmetric part of a real matrix, or more generally the Hermitian part `(A + A') / 2` of a real or complex matrix `A`, use `hermitianpart`.
Examples
```julia
julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Matrix{Int64}:
 1  2  3
 4  5  6
 7  8  9

julia> Supper = Symmetric(A)
3×3 Symmetric{Int64, Matrix{Int64}}:
 1  2  3
 2  5  6
 3  6  9

julia> Slower = Symmetric(A, :L)
3×3 Symmetric{Int64, Matrix{Int64}}:
 1  4  7
 4  5  8
 7  8  9

julia> hermitianpart(A)
3×3 Hermitian{Float64, Matrix{Float64}}:
 1.0  3.0  5.0
 3.0  5.0  7.0
 5.0  7.0  9.0
```
Note that `Supper` will not be equal to `Slower` unless `A` is itself symmetric (e.g. if `A == transpose(A)`).
LinearAlgebra.Hermitian — Type

`Hermitian(A::AbstractMatrix, uplo::Symbol=:U)`

Construct a `Hermitian` view of the upper (if `uplo = :U`) or lower (if `uplo = :L`) triangle of the matrix `A`.

To compute the Hermitian part of `A`, use `hermitianpart`.
Examples
```julia
julia> A = [1 2+2im 3-3im; 4 5 6-6im; 7 8+8im 9]
3×3 Matrix{Complex{Int64}}:
 1+0im  2+2im  3-3im
 4+0im  5+0im  6-6im
 7+0im  8+8im  9+0im

julia> Hupper = Hermitian(A)
3×3 Hermitian{Complex{Int64}, Matrix{Complex{Int64}}}:
 1+0im  2+2im  3-3im
 2-2im  5+0im  6-6im
 3+3im  6+6im  9+0im

julia> Hlower = Hermitian(A, :L)
3×3 Hermitian{Complex{Int64}, Matrix{Complex{Int64}}}:
 1+0im  4+0im  7+0im
 4+0im  5+0im  8-8im
 7+0im  8+8im  9+0im

julia> hermitianpart(A)
3×3 Hermitian{ComplexF64, Matrix{ComplexF64}}:
 1.0+0.0im  3.0+1.0im  5.0-1.5im
 3.0-1.0im  5.0+0.0im  7.0-7.0im
 5.0+1.5im  7.0+7.0im  9.0+0.0im
```
Note that `Hupper` will not be equal to `Hlower` unless `A` is itself Hermitian (e.g. if `A == adjoint(A)`).

All non-real parts of the diagonal will be ignored.

```julia
Hermitian(fill(complex(1,1), 1, 1)) == fill(1, 1, 1)
```
LinearAlgebra.LowerTriangular — Type

`LowerTriangular(A::AbstractMatrix)`

Construct a `LowerTriangular` view of the matrix `A`.
Examples
```julia
julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> LowerTriangular(A)
3×3 LowerTriangular{Float64, Matrix{Float64}}:
 1.0   ⋅    ⋅
 4.0  5.0   ⋅
 7.0  8.0  9.0
```
LinearAlgebra.UpperTriangular — Type

`UpperTriangular(A::AbstractMatrix)`

Construct an `UpperTriangular` view of the matrix `A`.
Examples
```julia
julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> UpperTriangular(A)
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 1.0  2.0  3.0
  ⋅   5.0  6.0
  ⋅    ⋅   9.0
```
LinearAlgebra.UnitLowerTriangular — Type

`UnitLowerTriangular(A::AbstractMatrix)`

Construct a `UnitLowerTriangular` view of the matrix `A`. Such a view has the `oneunit` of the `eltype` of `A` on its diagonal.
Examples
```julia
julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> UnitLowerTriangular(A)
3×3 UnitLowerTriangular{Float64, Matrix{Float64}}:
 1.0   ⋅    ⋅
 4.0  1.0   ⋅
 7.0  8.0  1.0
```
LinearAlgebra.UnitUpperTriangular — Type

`UnitUpperTriangular(A::AbstractMatrix)`

Construct a `UnitUpperTriangular` view of the matrix `A`. Such a view has the `oneunit` of the `eltype` of `A` on its diagonal.
Examples
```julia
julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 4.0  5.0  6.0
 7.0  8.0  9.0

julia> UnitUpperTriangular(A)
3×3 UnitUpperTriangular{Float64, Matrix{Float64}}:
 1.0  2.0  3.0
  ⋅   1.0  6.0
  ⋅    ⋅   1.0
```
LinearAlgebra.UpperHessenberg — Type

`UpperHessenberg(A::AbstractMatrix)`

Construct an `UpperHessenberg` view of the matrix `A`. Entries of `A` below the first subdiagonal are ignored.

This type was added in Julia 1.3.

Efficient algorithms are implemented for `H \ b`, `det(H)`, and similar.

See also the `hessenberg` function to factor any matrix into a similar upper-Hessenberg matrix.
If `F::Hessenberg` is the factorization object, the unitary matrix can be accessed with `F.Q` and the Hessenberg matrix with `F.H`. When `Q` is extracted, the resulting type is the `HessenbergQ` object, and may be converted to a regular matrix with `convert(Array, _)` (or `Array(_)` for short).

Iterating the decomposition produces the factors `F.Q` and `F.H`.
Examples
```julia
julia> A = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16]
4×4 Matrix{Int64}:
  1   2   3   4
  5   6   7   8
  9  10  11  12
 13  14  15  16

julia> UpperHessenberg(A)
4×4 UpperHessenberg{Int64, Matrix{Int64}}:
 1   2   3   4
 5   6   7   8
 ⋅  10  11  12
 ⋅   ⋅  15  16
```
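A brief sketch of the corresponding factorization (data invented): `hessenberg` returns an `F` whose factors recover the original matrix as `A ≈ F.Q * F.H * F.Q'`:

```julia
using LinearAlgebra

A = [4.0 1.0 -2.0; 1.0 2.0 0.0; -2.0 0.0 3.0]
F = hessenberg(A)
Q, H = F.Q, F.H              # unitary factor and upper-Hessenberg factor
@assert Q * H * Q' ≈ A
```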
LinearAlgebra.UniformScaling — Type

`UniformScaling{T<:Number}`

Generically sized uniform scaling operator defined as a scalar times the identity operator, `λ*I`. Although without an explicit `size`, it acts similarly to a matrix in many cases and includes support for some indexing. See also `I`.
Indexing using ranges is available as of Julia 1.6.
Examples
```julia
julia> J = UniformScaling(2.)
UniformScaling{Float64}
2.0*I

julia> A = [1. 2.; 3. 4.]
2×2 Matrix{Float64}:
 1.0  2.0
 3.0  4.0

julia> J*A
2×2 Matrix{Float64}:
 2.0  4.0
 6.0  8.0

julia> J[1:2, 1:2]
2×2 Matrix{Float64}:
 2.0  0.0
 0.0  2.0
```
LinearAlgebra.I — Constant

`I`

An object of type `UniformScaling`, representing an identity matrix of any size.
Examples
```julia
julia> fill(1, (5,6)) * I == fill(1, (5,6))
true

julia> [1 2im 3; 1im 2 3] * I
2×3 Matrix{Complex{Int64}}:
 1+0im  0+2im  3+0im
 0+1im  2+0im  3+0im
```
LinearAlgebra.UniformScaling — Method

`(I::UniformScaling)(n::Integer)`

Construct a `Diagonal` matrix from a `UniformScaling`.
This method is available as of Julia 1.2.
Examples
```julia
julia> I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> (0.7*I)(3)
3×3 Diagonal{Float64, Vector{Float64}}:
 0.7   ⋅    ⋅
  ⋅   0.7   ⋅
  ⋅    ⋅   0.7
```
LinearAlgebra.Factorization — Type

`LinearAlgebra.Factorization`

Abstract type for matrix factorizations, a.k.a. matrix decompositions. See the online documentation for a list of available matrix factorizations.
LinearAlgebra.LU — Type

`LU <: Factorization`

Matrix factorization type of the `LU` factorization of a square matrix `A`. This is the return type of `lu`, the corresponding matrix factorization function.

The individual components of the factorization `F::LU` can be accessed via `getproperty`:
Component | Description |
---|---|
F.L | L (unit lower triangular) part of `LU` |
F.U | U (upper triangular) part of `LU` |
F.p | (right) permutation `Vector` |
F.P | (right) permutation `Matrix` |
Iterating the factorization produces the components `F.L`, `F.U`, and `F.p`.
Examples
```julia
julia> A = [4 3; 6 3]
2×2 Matrix{Int64}:
 4  3
 6  3

julia> F = lu(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0       0.0
 0.666667  1.0
U factor:
2×2 Matrix{Float64}:
 6.0  3.0
 0.0  1.0

julia> F.L * F.U == A[F.p, :]
true

julia> l, u, p = lu(A); # destructuring via iteration

julia> l == F.L && u == F.U && p == F.p
true
```
LinearAlgebra.lu — Function

`lu(A::AbstractSparseMatrixCSC; check = true, q = nothing, control = get_umfpack_control()) -> F::UmfpackLU`

Compute the LU factorization of a sparse matrix `A`.

For sparse `A` with real or complex element type, the return type of `F` is `UmfpackLU{Tv, Ti}`, with `Tv` = `Float64` or `ComplexF64` respectively and `Ti` an integer type (`Int32` or `Int64`).

When `check = true`, an error is thrown if the decomposition fails. When `check = false`, responsibility for checking the decomposition's validity (via `issuccess`) lies with the user.

The permutation `q` can either be a permutation vector or `nothing`. If no permutation vector is provided or `q` is `nothing`, UMFPACK's default is used. If the permutation is not zero-based, a zero-based copy is made.

The `control` vector defaults to the Julia SparseArrays package's default configuration for UMFPACK (NB: this is modified from the UMFPACK defaults to disable iterative refinement), but can be changed by passing a vector of length `UMFPACK_CONTROL`; see the UMFPACK manual for possible configurations. For example, to reenable iterative refinement:
```julia
umfpack_control = SparseArrays.UMFPACK.get_umfpack_control(Float64, Int64) # read Julia default configuration for a Float64 sparse matrix
SparseArrays.UMFPACK.show_umf_ctrl(umfpack_control) # optional - display values
umfpack_control[SparseArrays.UMFPACK.JL_UMFPACK_IRSTEP] = 2.0 # reenable iterative refinement (2 is UMFPACK default max iterative refinement steps)
Alu = lu(A; control = umfpack_control)
x = Alu \ b # solve Ax = b, including UMFPACK iterative refinement
```
The individual components of the factorization `F` can be accessed by indexing:
Component | Description |
---|---|
L | L (lower triangular) part of `LU` |
U | U (upper triangular) part of `LU` |
p | right permutation `Vector` |
q | left permutation `Vector` |
Rs | `Vector` of scaling factors |
: | `(L, U, p, q, Rs)` components |
The relation between `F` and `A` is

`F.L*F.U == (F.Rs .* A)[F.p, F.q]`

`F` further supports the following functions:

See also `lu!`.
`lu(A::AbstractSparseMatrixCSC)` uses the UMFPACK[ACM832] library that is part of SuiteSparse. As this library only supports sparse matrices with `Float64` or `ComplexF64` elements, `lu` converts `A` into a copy that is of type `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{ComplexF64}` as appropriate.
`lu(A, pivot = RowMaximum(); check = true, allowsingular = false) -> F::LU`

Compute the LU factorization of `A`.

When `check = true`, an error is thrown if the decomposition fails. When `check = false`, responsibility for checking the decomposition's validity (via `issuccess`) lies with the user.

By default, with `check = true`, an error is also thrown when the decomposition produces valid factors, but the upper-triangular factor `U` is rank-deficient. This may be changed by passing `allowsingular = true`.

In most cases, if `A` is a subtype `S` of `AbstractMatrix{T}` with an element type `T` supporting `+`, `-`, `*` and `/`, the return type is `LU{T,S{T}}`.

In general, LU factorization involves a permutation of the rows of the matrix (corresponding to the `F.p` output described below), known as "pivoting" (because it corresponds to choosing which row contains the "pivot", the diagonal entry of `F.U`). One of the following pivoting strategies can be selected via the optional `pivot` argument:

* `RowMaximum()` (default): the standard pivoting strategy; the pivot corresponds to the element of maximum absolute value among the remaining, to be factorized rows. This pivoting strategy requires the element type to also support `abs` and `<`. (This is generally the only numerically stable option for floating-point matrices.)
* `RowNonZero()`: the pivot corresponds to the first non-zero element among the remaining, to be factorized rows. (This corresponds to the typical choice in hand calculations, and is also useful for more general algebraic number types that support `iszero` but not `abs` or `<`.)
* `NoPivot()`: pivoting turned off (will fail if a zero entry is encountered in a pivot position, even when `allowsingular = true`).

The individual components of the factorization `F` can be accessed via `getproperty`:
Component | Description |
---|---|
F.L | L (lower triangular) part of `LU` |
F.U | U (upper triangular) part of `LU` |
F.p | (right) permutation `Vector` |
F.P | (right) permutation `Matrix` |
Iterating the factorization produces the components `F.L`, `F.U`, and `F.p`.

The relationship between `F` and `A` is

`F.L*F.U == A[F.p, :]`

`F` further supports the following functions:
Supported function | LU | LU{T,Tridiagonal{T}} |
---|---|---|
/ | ✓ | |
\ | ✓ | ✓ |
inv | ✓ | ✓ |
det | ✓ | ✓ |
logdet | ✓ | ✓ |
logabsdet | ✓ | ✓ |
size | ✓ | ✓ |
The `allowsingular` keyword argument was added in Julia 1.11.
Examples
```julia
julia> A = [4 3; 6 3]
2×2 Matrix{Int64}:
 4  3
 6  3

julia> F = lu(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0       0.0
 0.666667  1.0
U factor:
2×2 Matrix{Float64}:
 6.0  3.0
 0.0  1.0

julia> F.L * F.U == A[F.p, :]
true

julia> l, u, p = lu(A); # destructuring via iteration

julia> l == F.L && u == F.U && p == F.p
true

julia> lu([1 2; 1 2], allowsingular = true)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0  0.0
 1.0  1.0
U factor (rank-deficient):
2×2 Matrix{Float64}:
 1.0  2.0
 0.0  0.0
```
LinearAlgebra.lu! — Function

`lu!(F::UmfpackLU, A::AbstractSparseMatrixCSC; check=true, reuse_symbolic=true, q=nothing) -> F::UmfpackLU`

Compute the LU factorization of a sparse matrix `A`, reusing the symbolic factorization of an already existing LU factorization stored in `F`. Unless `reuse_symbolic` is set to false, the sparse matrix `A` must have an identical nonzero pattern as the matrix used to create the LU factorization `F`, otherwise an error is thrown. If the sizes of `A` and `F` differ, all vectors will be resized accordingly.

When `check = true`, an error is thrown if the decomposition fails. When `check = false`, responsibility for checking the decomposition's validity (via `issuccess`) lies with the user.

The permutation `q` can either be a permutation vector or `nothing`. If no permutation vector is provided or `q` is `nothing`, UMFPACK's default is used. If the permutation is not zero-based, a zero-based copy is made.
See also `lu`.

`lu!(F::UmfpackLU, A::AbstractSparseMatrixCSC)` uses the UMFPACK library that is part of SuiteSparse. As this library only supports sparse matrices with `Float64` or `ComplexF64` elements, `lu!` will automatically convert the types to those set by the LU factorization or `SparseMatrixCSC{ComplexF64}` as appropriate.

`lu!` for `UmfpackLU` requires at least Julia 1.5.
Examples
```julia
julia> A = sparse(Float64[1.0 2.0; 0.0 3.0]);

julia> F = lu(A);

julia> B = sparse(Float64[1.0 1.0; 0.0 1.0]);

julia> lu!(F, B);

julia> F \ ones(2)
2-element Vector{Float64}:
 0.0
 1.0
```
`lu!(A, pivot = RowMaximum(); check = true, allowsingular = false) -> LU`

`lu!` is the same as `lu`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

The `allowsingular` keyword argument was added in Julia 1.11.
Examples
```julia
julia> A = [4. 3.; 6. 3.]
2×2 Matrix{Float64}:
 4.0  3.0
 6.0  3.0

julia> F = lu!(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
2×2 Matrix{Float64}:
 1.0       0.0
 0.666667  1.0
U factor:
2×2 Matrix{Float64}:
 6.0  3.0
 0.0  1.0

julia> iA = [4 3; 6 3]
2×2 Matrix{Int64}:
 4  3
 6  3

julia> lu!(iA)
ERROR: InexactError: Int64(0.6666666666666666)
Stacktrace:
[...]
```
LinearAlgebra.Cholesky — Type

`Cholesky <: Factorization`

Matrix factorization type of the Cholesky factorization of a dense symmetric/Hermitian positive definite matrix `A`. This is the return type of `cholesky`, the corresponding matrix factorization function.

The triangular Cholesky factor can be obtained from the factorization `F::Cholesky` via `F.L` and `F.U`, where `A ≈ F.U' * F.U ≈ F.L * F.L'`.

The following functions are available for `Cholesky` objects: `size`, `\`, `inv`, `det`, `logdet` and `isposdef`.

Iterating the decomposition produces the components `L` and `U`.
Examples
```julia
julia> A = [4. 12. -16.; 12. 37. -43.; -16. -43. 98.]
3×3 Matrix{Float64}:
   4.0   12.0  -16.0
  12.0   37.0  -43.0
 -16.0  -43.0   98.0

julia> C = cholesky(A)
Cholesky{Float64, Matrix{Float64}}
U factor:
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.U
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.L
3×3 LowerTriangular{Float64, Matrix{Float64}}:
  2.0   ⋅    ⋅
  6.0  1.0   ⋅
 -8.0  5.0  3.0

julia> C.L * C.U == A
true

julia> l, u = C; # destructuring via iteration

julia> l == C.L && u == C.U
true
```
LinearAlgebra.CholeskyPivoted — Type

`CholeskyPivoted`

Matrix factorization type of the pivoted Cholesky factorization of a dense symmetric/Hermitian positive semi-definite matrix `A`. This is the return type of `cholesky(_, ::RowMaximum)`, the corresponding matrix factorization function.

The triangular Cholesky factor can be obtained from the factorization `F::CholeskyPivoted` via `F.L` and `F.U`, and the permutation via `F.p`, where `A[F.p, F.p] ≈ Ur' * Ur ≈ Lr * Lr'` with `Ur = F.U[1:F.rank, :]` and `Lr = F.L[:, 1:F.rank]`, or alternatively `A ≈ Up' * Up ≈ Lp * Lp'` with `Up = F.U[1:F.rank, invperm(F.p)]` and `Lp = F.L[invperm(F.p), 1:F.rank]`.

The following functions are available for `CholeskyPivoted` objects: `size`, `\`, `inv`, `det`, and `rank`.

Iterating the decomposition produces the components `L` and `U`.
Examples
```julia
julia> X = [1.0, 2.0, 3.0, 4.0];

julia> A = X * X';

julia> C = cholesky(A, RowMaximum(), check = false)
CholeskyPivoted{Float64, Matrix{Float64}, Vector{Int64}}
U factor with rank 1:
4×4 UpperTriangular{Float64, Matrix{Float64}}:
 4.0  2.0  3.0  1.0
  ⋅   0.0  6.0  2.0
  ⋅    ⋅   9.0  3.0
  ⋅    ⋅    ⋅   1.0
permutation:
4-element Vector{Int64}:
 4
 2
 3
 1

julia> C.U[1:C.rank, :]' * C.U[1:C.rank, :] ≈ A[C.p, C.p]
true

julia> l, u = C; # destructuring via iteration

julia> l == C.L && u == C.U
true
```
LinearAlgebra.cholesky — Function

`cholesky(A, NoPivot(); check = true) -> Cholesky`

Compute the Cholesky factorization of a dense symmetric positive definite matrix `A` and return a `Cholesky` factorization. The matrix `A` can either be a `Symmetric` or `Hermitian` `AbstractMatrix` or a perfectly symmetric or Hermitian `AbstractMatrix`.

The triangular Cholesky factor can be obtained from the factorization `F` via `F.L` and `F.U`, where `A ≈ F.U' * F.U ≈ F.L * F.L'`.

The following functions are available for `Cholesky` objects: `size`, `\`, `inv`, `det`, `logdet` and `isposdef`.

If you have a matrix `A` that is slightly non-Hermitian due to roundoff errors in its construction, wrap it in `Hermitian(A)` before passing it to `cholesky` in order to treat it as perfectly Hermitian.

When `check = true`, an error is thrown if the decomposition fails. When `check = false`, responsibility for checking the decomposition's validity (via `issuccess`) lies with the user.
Examples
```julia
julia> A = [4. 12. -16.; 12. 37. -43.; -16. -43. 98.]
3×3 Matrix{Float64}:
   4.0   12.0  -16.0
  12.0   37.0  -43.0
 -16.0  -43.0   98.0

julia> C = cholesky(A)
Cholesky{Float64, Matrix{Float64}}
U factor:
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.U
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  6.0  -8.0
  ⋅   1.0   5.0
  ⋅    ⋅    3.0

julia> C.L
3×3 LowerTriangular{Float64, Matrix{Float64}}:
  2.0   ⋅    ⋅
  6.0  1.0   ⋅
 -8.0  5.0  3.0

julia> C.L * C.U == A
true
```
`cholesky(A, RowMaximum(); tol = 0.0, check = true) -> CholeskyPivoted`

Compute the pivoted Cholesky factorization of a dense symmetric positive semi-definite matrix `A` and return a `CholeskyPivoted` factorization. The matrix `A` can either be a `Symmetric` or `Hermitian` `AbstractMatrix` or a perfectly symmetric or Hermitian `AbstractMatrix`.

The triangular Cholesky factor can be obtained from the factorization `F` via `F.L` and `F.U`, and the permutation via `F.p`, where `A[F.p, F.p] ≈ Ur' * Ur ≈ Lr * Lr'` with `Ur = F.U[1:F.rank, :]` and `Lr = F.L[:, 1:F.rank]`, or alternatively `A ≈ Up' * Up ≈ Lp * Lp'` with `Up = F.U[1:F.rank, invperm(F.p)]` and `Lp = F.L[invperm(F.p), 1:F.rank]`.

The following functions are available for `CholeskyPivoted` objects: `size`, `\`, `inv`, `det`, and `rank`.

The argument `tol` determines the tolerance for determining the rank. For negative values, the tolerance is the machine precision.

If you have a matrix `A` that is slightly non-Hermitian due to roundoff errors in its construction, wrap it in `Hermitian(A)` before passing it to `cholesky` in order to treat it as perfectly Hermitian.

When `check = true`, an error is thrown if the decomposition fails. When `check = false`, responsibility for checking the decomposition's validity (via `issuccess`) lies with the user.
Examples
```julia
julia> X = [1.0, 2.0, 3.0, 4.0];

julia> A = X * X';

julia> C = cholesky(A, RowMaximum(), check = false)
CholeskyPivoted{Float64, Matrix{Float64}, Vector{Int64}}
U factor with rank 1:
4×4 UpperTriangular{Float64, Matrix{Float64}}:
 4.0  2.0  3.0  1.0
  ⋅   0.0  6.0  2.0
  ⋅    ⋅   9.0  3.0
  ⋅    ⋅    ⋅   1.0
permutation:
4-element Vector{Int64}:
 4
 2
 3
 1

julia> C.U[1:C.rank, :]' * C.U[1:C.rank, :] ≈ A[C.p, C.p]
true

julia> l, u = C; # destructuring via iteration

julia> l == C.L && u == C.U
true
```
`cholesky(A::SparseMatrixCSC; shift = 0.0, check = true, perm = nothing) -> CHOLMOD.Factor`

Compute the Cholesky factorization of a sparse positive definite matrix `A`. `A` must be a `SparseMatrixCSC` or a `Symmetric`/`Hermitian` view of a `SparseMatrixCSC`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian. If `perm` is not given, a fill-reducing permutation is used. `F = cholesky(A)` is most frequently used to solve systems of equations with `F\b`, but also the methods `diag`, `det`, and `logdet` are defined for `F`. You can also extract individual factors from `F`, using `F.L`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it's typically preferable to extract "combined" factors like `PtL = F.PtL` (the equivalent of `P'*L`) and `LtP = F.UP` (the equivalent of `L'*P`).

When `check = true`, an error is thrown if the decomposition fails. When `check = false`, responsibility for checking the decomposition's validity (via `issuccess`) lies with the user.

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is provided, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).

Examples

In the following example, the fill-reducing permutation used is `[3, 2, 1]`. If `perm` is set to `1:3` to enforce no permutation, the number of nonzero elements in the factor is 6.
```julia
julia> A = [2 1 1; 1 2 0; 1 0 2]
3×3 Matrix{Int64}:
 2  1  1
 1  2  0
 1  0  2

julia> C = cholesky(sparse(A))
SparseArrays.CHOLMOD.Factor{Float64, Int64}
type:    LLt
method:  simplicial
maxnnz:  5
nnz:     5
success: true

julia> C.p
3-element Vector{Int64}:
 3
 2
 1

julia> L = sparse(C.L);

julia> Matrix(L)
3×3 Matrix{Float64}:
 1.41421   0.0       0.0
 0.0       1.41421   0.0
 0.707107  0.707107  1.0

julia> L * L' ≈ A[C.p, C.p]
true

julia> P = sparse(1:3, C.p, ones(3))
3×3 SparseMatrixCSC{Float64, Int64} with 3 stored entries:
  ⋅    ⋅   1.0
  ⋅   1.0   ⋅
 1.0   ⋅    ⋅

julia> P' * L * L' * P ≈ A
true

julia> C = cholesky(sparse(A), perm=1:3)
SparseArrays.CHOLMOD.Factor{Float64, Int64}
type:    LLt
method:  simplicial
maxnnz:  6
nnz:     6
success: true

julia> L = sparse(C.L);

julia> Matrix(L)
3×3 Matrix{Float64}:
 1.41421    0.0       0.0
 0.707107   1.22474   0.0
 0.707107  -0.408248  1.1547

julia> L * L' ≈ A
true
```
This method uses the CHOLMOD[ACM887][DavisHager2009] library from SuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the `Base.SparseArrays.CHOLMOD` module.
LinearAlgebra.cholesky! — Function

`cholesky!(A::AbstractMatrix, NoPivot(); check = true) -> Cholesky`

The same as `cholesky`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.
Examples
```julia
julia> A = [1 2; 2 50]
2×2 Matrix{Int64}:
 1   2
 2  50

julia> cholesky!(A)
ERROR: InexactError: Int64(6.782329983125268)
Stacktrace:
[...]
```
`cholesky!(A::AbstractMatrix, RowMaximum(); tol = 0.0, check = true) -> CholeskyPivoted`

The same as `cholesky`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

`cholesky!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor`

Compute the Cholesky ($LL'$) factorization of `A`, reusing the symbolic factorization `F`. `A` must be a `SparseMatrixCSC` or a `Symmetric`/`Hermitian` view of a `SparseMatrixCSC`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian.

See also `cholesky`.

This method uses the CHOLMOD library from SuiteSparse, which only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.
LinearAlgebra.lowrankupdate — Function

`lowrankupdate(C::Cholesky, v::AbstractVector) -> CC::Cholesky`

Update a Cholesky factorization `C` with the vector `v`. If `A = C.U'C.U` then `CC = cholesky(C.U'C.U + v*v')` but the computation of `CC` only uses `O(n^2)` operations.

`lowrankupdate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor`

Get an `LDLt` factorization of `A + C*C'` given an `LDLt` or `LLt` factorization `F` of `A`.

The returned factor is always an `LDLt` factorization.
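A small sketch (data invented) checking the defining identity for the dense `Cholesky` method:

```julia
using LinearAlgebra

A = [4.0 2.0; 2.0 3.0]       # symmetric positive definite
C = cholesky(A)
v = [1.0, 0.5]
CC = lowrankupdate(C, v)     # factorization of A + v*v' in O(n^2) work
@assert CC.U' * CC.U ≈ A + v * v'
```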
LinearAlgebra.lowrankdowndate — Function

`lowrankdowndate(C::Cholesky, v::AbstractVector) -> CC::Cholesky`

Downdate a Cholesky factorization `C` with the vector `v`. If `A = C.U'C.U` then `CC = cholesky(C.U'C.U - v*v')` but the computation of `CC` only uses `O(n^2)` operations.

`lowrankdowndate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor`

Get an `LDLt` factorization of `A - C*C'` given an `LDLt` or `LLt` factorization `F` of `A`.

The returned factor is always an `LDLt` factorization.

See also `lowrankdowndate!`, `lowrankupdate`, `lowrankupdate!`.
LinearAlgebra.lowrankupdate! — Function

`lowrankupdate!(C::Cholesky, v::AbstractVector) -> CC::Cholesky`

Update a Cholesky factorization `C` with the vector `v`. If `A = C.U'C.U` then `CC = cholesky(C.U'C.U + v*v')` but the computation of `CC` only uses `O(n^2)` operations. The input factorization `C` is updated in place such that on exit `C == CC`. The vector `v` is destroyed during the computation.

`lowrankupdate!(F::CHOLMOD.Factor, C::AbstractArray)`

Update an `LDLt` or `LLt` factorization `F` of `A` to a factorization of `A + C*C'`.

`LLt` factorizations are converted to `LDLt`.

See also `lowrankupdate`, `lowrankdowndate`, `lowrankdowndate!`.
LinearAlgebra.lowrankdowndate! — Function

`lowrankdowndate!(C::Cholesky, v::AbstractVector) -> CC::Cholesky`

Downdate a Cholesky factorization `C` with the vector `v`. If `A = C.U'C.U` then `CC = cholesky(C.U'C.U - v*v')` but the computation of `CC` only uses `O(n^2)` operations. The input factorization `C` is updated in place such that on exit `C == CC`. The vector `v` is destroyed during the computation.

`lowrankdowndate!(F::CHOLMOD.Factor, C::AbstractArray)`

Update an `LDLt` or `LLt` factorization `F` of `A` to a factorization of `A - C*C'`.

`LLt` factorizations are converted to `LDLt`.

See also `lowrankdowndate`, `lowrankupdate`, `lowrankupdate!`.
LinearAlgebra.LDLt — Type

`LDLt <: Factorization`

Matrix factorization type of the `LDLt` factorization of a real `SymTridiagonal` matrix `S` such that `S = L*Diagonal(d)*L'`, where `L` is a `UnitLowerTriangular` matrix and `d` is a vector. The main use of an `LDLt` factorization `F = ldlt(S)` is to solve the linear system of equations `Sx = b` with `F\b`. This is the return type of `ldlt`, the corresponding matrix factorization function.

The individual components of the factorization `F::LDLt` can be accessed via `getproperty`:
Component | Description |
---|---|
F.L | L (unit lower triangular) part of `LDLt` |
F.D | D (diagonal) part of `LDLt` |
F.Lt | Lt (unit upper triangular) part of `LDLt` |
F.d | diagonal values of `D` as a `Vector` |
Examples
```julia
julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])
3×3 SymTridiagonal{Float64, Vector{Float64}}:
 3.0  1.0   ⋅
 1.0  4.0  2.0
  ⋅   2.0  5.0

julia> F = ldlt(S)
LDLt{Float64, SymTridiagonal{Float64, Vector{Float64}}}
L factor:
3×3 UnitLowerTriangular{Float64, SymTridiagonal{Float64, Vector{Float64}}}:
 1.0        ⋅         ⋅
 0.333333  1.0        ⋅
 0.0       0.545455  1.0
D factor:
3×3 Diagonal{Float64, Vector{Float64}}:
 3.0   ⋅        ⋅
  ⋅   3.66667   ⋅
  ⋅    ⋅       3.90909
```
LinearAlgebra.ldlt — Function

`ldlt(S::SymTridiagonal) -> LDLt`

Compute an `LDLt` (i.e., $LDL^T$) factorization of the real symmetric tridiagonal matrix `S` such that `S = L*Diagonal(d)*L'` where `L` is a unit lower triangular matrix and `d` is a vector. The main use of an `LDLt` factorization `F = ldlt(S)` is to solve the linear system of equations `Sx = b` with `F\b`.

See also `bunchkaufman` for a similar, but pivoted, factorization of arbitrary symmetric or Hermitian matrices.
Examples
```julia
julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])
3×3 SymTridiagonal{Float64, Vector{Float64}}:
 3.0  1.0   ⋅
 1.0  4.0  2.0
  ⋅   2.0  5.0

julia> ldltS = ldlt(S);

julia> b = [6., 7., 8.];

julia> ldltS \ b
3-element Vector{Float64}:
 1.7906976744186047
 0.627906976744186
 1.3488372093023255

julia> S \ b
3-element Vector{Float64}:
 1.7906976744186047
 0.627906976744186
 1.3488372093023255
```
`ldlt(A::SparseMatrixCSC; shift = 0.0, check = true, perm=nothing) -> CHOLMOD.Factor`

Compute the $LDL'$ factorization of a sparse matrix `A`. `A` must be a `SparseMatrixCSC` or a `Symmetric`/`Hermitian` view of a `SparseMatrixCSC`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. `F = ldlt(A)` is most frequently used to solve systems of equations `A*x = b` with `F\b`. The returned factorization object `F` also supports the methods `diag`, `det`, `logdet`, and `inv`. You can extract individual factors from `F` using `F.L`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*D*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like `PtL = F.PtL` (the equivalent of `P'*L`) and `LtP = F.UP` (the equivalent of `L'*P`). The complete list of supported factors is `:L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP`.

When `check = true`, an error is thrown if the decomposition fails. When `check = false`, responsibility for checking the decomposition's validity (via `issuccess`) lies with the user.

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is provided, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).
This method uses the CHOLMOD[ACM887][DavisHager2009] library fromSuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.
Many other functions from CHOLMOD are wrapped but not exported from theBase.SparseArrays.CHOLMOD
module.
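A minimal usage sketch (the matrix is illustrative; this assumes the standard SuiteSparse-backed sparse support):

using LinearAlgebra, SparseArrays
A = sparse([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])   # symmetric, no type tag needed
F = ldlt(A)                 # fill-reducing, pivoted LDL' via CHOLMOD
b = [1.0, 2.0, 3.0]
x = F \ b
A * x ≈ b                   # true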
LinearAlgebra.ldlt! — Function
ldlt!(S::SymTridiagonal) -> LDLt
Same as ldlt, but saves space by overwriting the input S, instead of creating a copy.
Examples
julia> S = SymTridiagonal([3., 4., 5.], [1., 2.])3×3 SymTridiagonal{Float64, Vector{Float64}}: 3.0 1.0 ⋅ 1.0 4.0 2.0 ⋅ 2.0 5.0julia> ldltS = ldlt!(S);julia> ldltS === Sfalsejulia> S3×3 SymTridiagonal{Float64, Vector{Float64}}: 3.0 0.333333 ⋅ 0.333333 3.66667 0.545455 ⋅ 0.545455 3.90909
ldlt!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor
Compute the $LDL'$ factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian.
See also ldlt.
This method uses the CHOLMOD library from SuiteSparse, which only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.
LinearAlgebra.QR — Type
QR <: Factorization
A QR matrix factorization stored in a packed format, typically obtained from qr. If $A$ is an m×n matrix, then
\[A = Q R\]
where $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. The matrix $Q$ is stored as a sequence of Householder reflectors $v_i$ and coefficients $\tau_i$ where:
\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T).\]
Iterating the decomposition produces the components Q and R.
The object has two fields:
- factors is an m×n matrix. The upper triangular part contains the elements of $R$, that is R = triu(F.factors) for a QR object F. The subdiagonal part contains the reflectors $v_i$ stored in a packed format where $v_i$ is the $i$th column of the matrix V = I + tril(F.factors, -1).
- τ is a vector of length min(m,n) containing the coefficients $\tau_i$.
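To see the packed storage concretely, one can force this generic path with a non-BLAS element type; using BigFloat below is merely a convenient way to obtain a plain QR object, not a recommendation:

using LinearAlgebra
A = big.([1.0 2.0; 3.0 4.0])
F = qr(A)                     # F isa LinearAlgebra.QR (generic, non-BLAS eltype)
R = triu(F.factors)           # the R factor, recovered from the packed field
V = I + tril(F.factors, -1)   # unit lower part packs the reflectors v_i
R ≈ F.R                       # true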
LinearAlgebra.QRCompactWY — Type
QRCompactWY <: Factorization
A QR matrix factorization stored in a compact blocked format, typically obtained from qr. If $A$ is an m×n matrix, then
\[A = Q R\]
where $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. It is similar to the QR format except that the orthogonal/unitary matrix $Q$ is stored in Compact WY format [Schreiber1989]. For the block size $n_b$, it is stored as an m×n lower trapezoidal matrix $V$ and a matrix $T = (T_1 \; T_2 \; ... \; T_{b-1} \; T_b')$ composed of $b = \lceil \min(m,n) / n_b \rceil$ upper triangular matrices $T_j$ of size $n_b$×$n_b$ ($j = 1, ..., b-1$) and an upper trapezoidal $n_b$×$\min(m,n) - (b-1) n_b$ matrix $T_b'$ ($j = b$) whose upper square part, denoted $T_b$, satisfies
\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T) = \prod_{j=1}^{b} (I - V_j T_j V_j^T)\]
such that $v_i$ is the $i$th column of $V$, $\tau_i$ is the $i$th element of [diag(T_1); diag(T_2); …; diag(T_b)], and $(V_1 \; V_2 \; ... \; V_b)$ is the left m×min(m, n) block of $V$. When constructed using qr, the block size is given by $n_b = \min(m, n, 36)$.
Iterating the decomposition produces the components Q and R.
The object has two fields:
- factors, as in the QR type, is an m×n matrix. The upper triangular part contains the elements of $R$, that is R = triu(F.factors) for a QR object F. The subdiagonal part contains the reflectors $v_i$ stored in a packed format such that V = I + tril(F.factors, -1).
- T is an $n_b$-by-$\min(m,n)$ matrix as described above. The subdiagonal elements for each triangular matrix $T_j$ are ignored.
This format should not be confused with the older WY representation [Bischof1987].
LinearAlgebra.QRPivoted — Type
QRPivoted <: Factorization
A QR matrix factorization with column pivoting in a packed format, typically obtained from qr. If $A$ is an m×n matrix, then
\[A P = Q R\]
where $P$ is a permutation matrix, $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. The matrix $Q$ is stored as a sequence of Householder reflectors:
\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T).\]
Iterating the decomposition produces the components Q, R, and p.
The object has three fields:
- factors is an m×n matrix. The upper triangular part contains the elements of $R$, that is R = triu(F.factors) for a QR object F. The subdiagonal part contains the reflectors $v_i$ stored in a packed format where $v_i$ is the $i$th column of the matrix V = I + tril(F.factors, -1).
- τ is a vector of length min(m,n) containing the coefficients $\tau_i$.
- jpvt is an integer vector of length n corresponding to the permutation $P$.
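A brief sketch of the pivoted identity A P = Q R (the matrix is illustrative):

using LinearAlgebra
A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 10.0]
F = qr(A, ColumnNorm())     # F isa QRPivoted
A[:, F.p] ≈ F.Q * F.R       # true: columns permuted by the pivot vector
A * F.P ≈ F.Q * F.R         # true: the same identity with the permutation matrix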
LinearAlgebra.qr — Function
qr(A::SparseMatrixCSC; tol=_default_tol(A), ordering=ORDERING_DEFAULT) -> QRSparse
Compute the QR factorization of a sparse matrix A. Fill-reducing row and column permutations are used such that F.R = F.Q'*A[F.prow,F.pcol]. The main application of this type is to solve least squares or underdetermined problems with \. The function calls the C library SPQR [ACM933].
qr(A::SparseMatrixCSC) uses the SPQR library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, as of Julia v1.4 qr converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.
Examples
julia> A = sparse([1,2,3,4], [1,1,2,2], [1.0,1.0,1.0,1.0])4×2 SparseMatrixCSC{Float64, Int64} with 4 stored entries: 1.0 ⋅ 1.0 ⋅ ⋅ 1.0 ⋅ 1.0julia> qr(A)SparseArrays.SPQR.QRSparse{Float64, Int64}Q factor:4×4 SparseArrays.SPQR.QRSparseQ{Float64, Int64}R factor:2×2 SparseMatrixCSC{Float64, Int64} with 2 stored entries: -1.41421 ⋅ ⋅ -1.41421Row permutation:4-element Vector{Int64}: 1 3 4 2Column permutation:2-element Vector{Int64}: 1 2
qr(A, pivot = NoPivot(); blocksize) -> F
Compute the QR factorization of the matrix A: an orthogonal (or unitary if A is complex-valued) matrix Q, and an upper triangular matrix R such that
\[A = Q R\]
The returned object F stores the factorization in a packed format:
- if pivot == ColumnNorm() then F is a QRPivoted object,
- otherwise if the element type of A is a BLAS type (Float32, Float64, ComplexF32 or ComplexF64), then F is a QRCompactWY object,
- otherwise F is a QR object.
The individual components of the decomposition F can be retrieved via property accessors:
- F.Q: the orthogonal/unitary matrix Q
- F.R: the upper triangular matrix R
- F.p: the permutation vector of the pivot (QRPivoted only)
- F.P: the permutation matrix of the pivot (QRPivoted only)
Each reference to the upper triangular factor via F.R allocates a new array. It is therefore advisable to cache that array, say, by R = F.R and continue working with R.
Iterating the decomposition produces the components Q, R, and if extant p.
The following functions are available for the QR objects: inv, size, and \. When A is rectangular, \ will return a least squares solution, and if the solution is not unique, the one with smallest norm is returned. When A is not full rank, factorization with (column) pivoting is required to obtain a minimum norm solution.
Multiplication with respect to either full/square or non-full/square Q is allowed, i.e. both F.Q*F.R and F.Q*A are supported. A Q matrix can be converted into a regular matrix with Matrix. This operation returns the "thin" Q factor, i.e., if A is m×n with m >= n, then Matrix(F.Q) yields an m×n matrix with orthonormal columns. To retrieve the "full" Q factor, an m×m orthogonal matrix, use F.Q*I or collect(F.Q). If m <= n, then Matrix(F.Q) yields an m×m orthogonal matrix.
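For instance, a small sketch of the thin versus full Q factor:

using LinearAlgebra
A = [1.0 0.0; 0.0 1.0; 1.0 1.0]        # 3×2, so m > n
F = qr(A)
size(Matrix(F.Q))                      # (3, 2): the "thin" Q factor
size(F.Q * I)                          # (3, 3): the "full" square Q
Matrix(F.Q)' * Matrix(F.Q) ≈ Matrix(I, 2, 2)   # true: orthonormal columns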
The block size for QR decomposition can be specified by the keyword argument blocksize :: Integer when pivot == NoPivot() and A isa StridedMatrix{<:BlasFloat}. It is ignored when blocksize > minimum(size(A)). See QRCompactWY.
The blocksize keyword argument requires Julia 1.4 or later.
Examples
julia> A = [3.0 -6.0; 4.0 -8.0; 0.0 1.0]3×2 Matrix{Float64}: 3.0 -6.0 4.0 -8.0 0.0 1.0julia> F = qr(A)LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}, Matrix{Float64}}Q factor: 3×3 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}R factor:2×2 Matrix{Float64}: -5.0 10.0 0.0 -1.0julia> F.Q * F.R == Atrue
qr returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that the Q and R matrices can be stored compactly rather than as two separate dense matrices.
LinearAlgebra.qr! — Function
qr!(A, pivot = NoPivot(); blocksize)
qr! is the same as qr when A is a subtype of AbstractMatrix, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. for integer types.
The blocksize keyword argument requires Julia 1.4 or later.
Examples
julia> a = [1. 2.; 3. 4.]2×2 Matrix{Float64}: 1.0 2.0 3.0 4.0julia> qr!(a)LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}, Matrix{Float64}}Q factor: 2×2 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}R factor:2×2 Matrix{Float64}: -3.16228 -4.42719 0.0 -0.632456julia> a = [1 2; 3 4]2×2 Matrix{Int64}: 1 2 3 4julia> qr!(a)ERROR: InexactError: Int64(3.1622776601683795)Stacktrace:[...]
LinearAlgebra.LQ — Type
LQ <: Factorization
Matrix factorization type of the LQ factorization of a matrix A. The LQ decomposition is the QR decomposition of transpose(A). This is the return type of lq, the corresponding matrix factorization function.
If S::LQ is the factorization object, the lower triangular component can be obtained via S.L, and the orthogonal/unitary component via S.Q, such that A ≈ S.L*S.Q.
Iterating the decomposition produces the components S.L and S.Q.
Examples
julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> S = lq(A)LQ{Float64, Matrix{Float64}, Vector{Float64}}L factor:2×2 Matrix{Float64}: -8.60233 0.0 4.41741 -0.697486Q factor: 2×2 LinearAlgebra.LQPackedQ{Float64, Matrix{Float64}, Vector{Float64}}julia> S.L * S.Q2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> l, q = S; # destructuring via iterationjulia> l == S.L && q == S.Qtrue
LinearAlgebra.lq — Function
lq(A) -> S::LQ
Compute the LQ decomposition of A. The decomposition's lower triangular component can be obtained from the LQ object S via S.L, and the orthogonal/unitary component via S.Q, such that A ≈ S.L*S.Q.
Iterating the decomposition produces the components S.L and S.Q.
The LQ decomposition is the QR decomposition of transpose(A), and it is useful in order to compute the minimum-norm solution lq(A) \ b to an underdetermined system of equations (A has more columns than rows, but has full row rank).
Examples
julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> S = lq(A)LQ{Float64, Matrix{Float64}, Vector{Float64}}L factor:2×2 Matrix{Float64}: -8.60233 0.0 4.41741 -0.697486Q factor: 2×2 LinearAlgebra.LQPackedQ{Float64, Matrix{Float64}, Vector{Float64}}julia> S.L * S.Q2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> l, q = S; # destructuring via iterationjulia> l == S.L && q == S.Qtrue
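As a further sketch of the minimum-norm use mentioned above (the wide matrix is illustrative):

using LinearAlgebra
A = [1.0 2.0 3.0; 4.0 5.0 6.0]   # 2×3 with full row rank: underdetermined
b = [1.0, 2.0]
x = lq(A) \ b
A * x ≈ b          # true: x solves the system
x ≈ pinv(A) * b    # true: and it is the minimum-norm solution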
LinearAlgebra.lq! — Function
LinearAlgebra.BunchKaufman — Type
BunchKaufman <: Factorization
Matrix factorization type of the Bunch-Kaufman factorization of a symmetric or Hermitian matrix A as P'UDU'P or P'LDL'P, depending on whether the upper (the default) or the lower triangle is stored in A. If A is complex symmetric then U' and L' denote the unconjugated transposes, i.e. transpose(U) and transpose(L), respectively. This is the return type of bunchkaufman, the corresponding matrix factorization function.
If S::BunchKaufman is the factorization object, the components can be obtained via S.D, S.U or S.L as appropriate given S.uplo, and S.p.
Iterating the decomposition produces the components S.D, S.U or S.L as appropriate given S.uplo, and S.p.
Examples
julia> A = Float64.([1 2; 2 3])2×2 Matrix{Float64}: 1.0 2.0 2.0 3.0julia> S = bunchkaufman(A) # A gets wrapped internally by Symmetric(A)BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: -0.333333 0.0 0.0 3.0U factor:2×2 UnitUpperTriangular{Float64, Matrix{Float64}}: 1.0 0.666667 ⋅ 1.0permutation:2-element Vector{Int64}: 1 2julia> d, u, p = S; # destructuring via iterationjulia> d == S.D && u == S.U && p == S.ptruejulia> S = bunchkaufman(Symmetric(A, :L))BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: 3.0 0.0 0.0 -0.333333L factor:2×2 UnitLowerTriangular{Float64, Matrix{Float64}}: 1.0 ⋅ 0.666667 1.0permutation:2-element Vector{Int64}: 2 1
LinearAlgebra.bunchkaufman — Function
bunchkaufman(A, rook::Bool=false; check = true) -> S::BunchKaufman
Compute the Bunch-Kaufman [Bunch1977] factorization of a symmetric or Hermitian matrix A as P'*U*D*U'*P or P'*L*D*L'*P, depending on which triangle is stored in A, and return a BunchKaufman object. Note that if A is complex symmetric then U' and L' denote the unconjugated transposes, i.e. transpose(U) and transpose(L).
Iterating the decomposition produces the components S.D, S.U or S.L as appropriate given S.uplo, and S.p.
If rook is true, rook pivoting is used. If rook is false, rook pivoting is not used.
When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.
The following functions are available for BunchKaufman objects: size, \, inv, issymmetric, ishermitian, getindex.
Examples
julia> A = Float64.([1 2; 2 3])2×2 Matrix{Float64}: 1.0 2.0 2.0 3.0julia> S = bunchkaufman(A) # A gets wrapped internally by Symmetric(A)BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: -0.333333 0.0 0.0 3.0U factor:2×2 UnitUpperTriangular{Float64, Matrix{Float64}}: 1.0 0.666667 ⋅ 1.0permutation:2-element Vector{Int64}: 1 2julia> d, u, p = S; # destructuring via iterationjulia> d == S.D && u == S.U && p == S.ptruejulia> S.U*S.D*S.U' - S.P*A*S.P'2×2 Matrix{Float64}: 0.0 0.0 0.0 0.0julia> S = bunchkaufman(Symmetric(A, :L))BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}D factor:2×2 Tridiagonal{Float64, Vector{Float64}}: 3.0 0.0 0.0 -0.333333L factor:2×2 UnitLowerTriangular{Float64, Matrix{Float64}}: 1.0 ⋅ 0.666667 1.0permutation:2-element Vector{Int64}: 2 1julia> S.L*S.D*S.L' - A[S.p, S.p]2×2 Matrix{Float64}: 0.0 0.0 0.0 0.0
LinearAlgebra.bunchkaufman! — Function
bunchkaufman!(A, rook::Bool=false; check = true) -> BunchKaufman
bunchkaufman! is the same as bunchkaufman, but saves space by overwriting the input A, instead of creating a copy.
LinearAlgebra.Eigen — Type
Eigen <: Factorization
Matrix factorization type of the eigenvalue/spectral decomposition of a square matrix A. This is the return type of eigen, the corresponding matrix factorization function.
If F::Eigen is the factorization object, the eigenvalues can be obtained via F.values and the eigenvectors as the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
Iterating the decomposition produces the components F.values and F.vectors.
Examples
julia> F = eigen([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}values:3-element Vector{Float64}: 1.0 3.0 18.0vectors:3×3 Matrix{Float64}: 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0julia> F.values3-element Vector{Float64}: 1.0 3.0 18.0julia> F.vectors3×3 Matrix{Float64}: 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0julia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
LinearAlgebra.GeneralizedEigen — Type
GeneralizedEigen <: Factorization
Matrix factorization type of the generalized eigenvalue/spectral decomposition of A and B. This is the return type of eigen, the corresponding matrix factorization function, when called with two matrix arguments.
If F::GeneralizedEigen is the factorization object, the eigenvalues can be obtained via F.values and the eigenvectors as the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
Iterating the decomposition produces the components F.values and F.vectors.
Examples
julia> A = [1 0; 0 -1]2×2 Matrix{Int64}: 1 0 0 -1julia> B = [0 1; 1 0]2×2 Matrix{Int64}: 0 1 1 0julia> F = eigen(A, B)GeneralizedEigen{ComplexF64, ComplexF64, Matrix{ComplexF64}, Vector{ComplexF64}}values:2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0imvectors:2×2 Matrix{ComplexF64}: 0.0+1.0im 0.0-1.0im -1.0+0.0im -1.0-0.0imjulia> F.values2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0imjulia> F.vectors2×2 Matrix{ComplexF64}: 0.0+1.0im 0.0-1.0im -1.0+0.0im -1.0-0.0imjulia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
LinearAlgebra.eigvals — Function
eigvals(A; permute::Bool=true, scale::Bool=true, sortby) -> values
Return the eigenvalues of A.
For general non-symmetric matrices it is possible to specify how the matrix is balanced before the eigenvalue calculation. The permute, scale, and sortby keywords are the same as for eigen.
Examples
julia> diag_matrix = [1 0; 0 4]2×2 Matrix{Int64}: 1 0 0 4julia> eigvals(diag_matrix)2-element Vector{Float64}: 1.0 4.0
For a scalar input, eigvals will return a scalar.
Examples
julia> eigvals(-2)-2
eigvals(A, B) -> values
Compute the generalized eigenvalues of A and B.
Examples
julia> A = [1 0; 0 -1]2×2 Matrix{Int64}: 1 0 0 -1julia> B = [0 1; 1 0]2×2 Matrix{Int64}: 0 1 1 0julia> eigvals(A,B)2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0im
eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values
Return the eigenvalues of A. It is possible to calculate only a subset of the eigenvalues by specifying a UnitRange irange covering indices of the sorted eigenvalues, e.g. the 2nd to 8th eigenvalues.
Examples
julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])3×3 SymTridiagonal{Float64, Vector{Float64}}: 1.0 2.0 ⋅ 2.0 2.0 3.0 ⋅ 3.0 1.0julia> eigvals(A, 2:2)1-element Vector{Float64}: 0.9999999999999996julia> eigvals(A)3-element Vector{Float64}: -2.1400549446402604 1.0000000000000002 5.140054944640259
eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values
Return the eigenvalues of A. It is possible to calculate only a subset of the eigenvalues by specifying a pair vl and vu for the lower and upper boundaries of the eigenvalues.
Examples
julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])3×3 SymTridiagonal{Float64, Vector{Float64}}: 1.0 2.0 ⋅ 2.0 2.0 3.0 ⋅ 3.0 1.0julia> eigvals(A, -1, 2)1-element Vector{Float64}: 1.0000000000000009julia> eigvals(A)3-element Vector{Float64}: -2.1400549446402604 1.0000000000000002 5.140054944640259
LinearAlgebra.eigvals! — Function
eigvals!(A; permute::Bool=true, scale::Bool=true, sortby) -> values
Same as eigvals, but saves space by overwriting the input A, instead of creating a copy. The permute, scale, and sortby keywords are the same as for eigen.
The input matrix A will not contain its eigenvalues after eigvals! is called on it - A is used as a workspace.
Examples
julia> A = [1. 2.; 3. 4.]2×2 Matrix{Float64}: 1.0 2.0 3.0 4.0julia> eigvals!(A)2-element Vector{Float64}: -0.3722813232690143 5.372281323269014julia> A2×2 Matrix{Float64}: -0.372281 -1.0 0.0 5.37228
eigvals!(A, B; sortby) -> values
Same as eigvals, but saves space by overwriting the input A (and B), instead of creating copies.
The input matrices A and B will not contain their eigenvalues after eigvals! is called. They are used as workspaces.
Examples
julia> A = [1. 0.; 0. -1.]2×2 Matrix{Float64}: 1.0 0.0 0.0 -1.0julia> B = [0. 1.; 1. 0.]2×2 Matrix{Float64}: 0.0 1.0 1.0 0.0julia> eigvals!(A, B)2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0imjulia> A2×2 Matrix{Float64}: -0.0 -1.0 1.0 -0.0julia> B2×2 Matrix{Float64}: 1.0 0.0 0.0 1.0
eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values
Same as eigvals, but saves space by overwriting the input A, instead of creating a copy. irange is a range of eigenvalue indices to search for - for instance, the 2nd to 8th eigenvalues.
eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values
Same as eigvals, but saves space by overwriting the input A, instead of creating a copy. vl is the lower bound of the interval to search for eigenvalues, and vu is the upper bound.
LinearAlgebra.eigmax — Function
eigmax(A; permute::Bool=true, scale::Bool=true)
Return the largest eigenvalue of A. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of A are complex, this method will fail, since complex numbers cannot be sorted.
Examples
julia> A = [0 im; -im 0]2×2 Matrix{Complex{Int64}}: 0+0im 0+1im 0-1im 0+0imjulia> eigmax(A)1.0julia> A = [0 im; -1 0]2×2 Matrix{Complex{Int64}}: 0+0im 0+1im -1+0im 0+0imjulia> eigmax(A)ERROR: DomainError with Complex{Int64}[0+0im 0+1im; -1+0im 0+0im]:`A` cannot have complex eigenvalues.Stacktrace:[...]
LinearAlgebra.eigmin — Function
eigmin(A; permute::Bool=true, scale::Bool=true)
Return the smallest eigenvalue of A. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of A are complex, this method will fail, since complex numbers cannot be sorted.
Examples
julia> A = [0 im; -im 0]2×2 Matrix{Complex{Int64}}: 0+0im 0+1im 0-1im 0+0imjulia> eigmin(A)-1.0julia> A = [0 im; -1 0]2×2 Matrix{Complex{Int64}}: 0+0im 0+1im -1+0im 0+0imjulia> eigmin(A)ERROR: DomainError with Complex{Int64}[0+0im 0+1im; -1+0im 0+0im]:`A` cannot have complex eigenvalues.Stacktrace:[...]
LinearAlgebra.eigvecs — Function
eigvecs(A::SymTridiagonal[, eigvals]) -> Matrix
Return a matrix M whose columns are the eigenvectors of A. (The kth eigenvector can be obtained from the slice M[:, k].)
If the optional vector of eigenvalues eigvals is specified, eigvecs returns the specific corresponding eigenvectors.
Examples
julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])3×3 SymTridiagonal{Float64, Vector{Float64}}: 1.0 2.0 ⋅ 2.0 2.0 3.0 ⋅ 3.0 1.0julia> eigvals(A)3-element Vector{Float64}: -2.1400549446402604 1.0000000000000002 5.140054944640259julia> eigvecs(A)3×3 Matrix{Float64}: 0.418304 -0.83205 0.364299 -0.656749 -7.39009e-16 0.754109 0.627457 0.5547 0.546448julia> eigvecs(A, [1.])3×1 Matrix{Float64}: 0.8320502943378438 4.263514128092366e-17 -0.5547001962252291
eigvecs(A; permute::Bool=true, scale::Bool=true, sortby) -> Matrix
Return a matrix M whose columns are the eigenvectors of A. (The kth eigenvector can be obtained from the slice M[:, k].) The permute, scale, and sortby keywords are the same as for eigen.
Examples
julia> eigvecs([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])3×3 Matrix{Float64}: 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0
eigvecs(A, B) -> Matrix
Return a matrix M whose columns are the generalized eigenvectors of A and B. (The kth eigenvector can be obtained from the slice M[:, k].)
Examples
julia> A = [1 0; 0 -1]2×2 Matrix{Int64}: 1 0 0 -1julia> B = [0 1; 1 0]2×2 Matrix{Int64}: 0 1 1 0julia> eigvecs(A, B)2×2 Matrix{ComplexF64}: 0.0+1.0im 0.0-1.0im -1.0+0.0im -1.0-0.0im
LinearAlgebra.eigen — Function
eigen(A; permute::Bool=true, scale::Bool=true, sortby) -> Eigen
Compute the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. This corresponds to solving an eigenvalue problem of the form Ax = λx, where A is a matrix, x is an eigenvector, and λ is an eigenvalue. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
Iterating the decomposition produces the components F.values and F.vectors.
The following functions are available for Eigen objects: inv, det, and isposdef.
For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is true for both options.
By default, the eigenvalues and vectors are sorted lexicographically by (real(λ), imag(λ)). A different comparison function by(λ) can be passed to sortby, or you can pass sortby=nothing to leave the eigenvalues in an arbitrary order. Some special matrix types (e.g. Diagonal or SymTridiagonal) may implement their own sorting convention and not accept a sortby keyword.
Examples
julia> F = eigen([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}values:3-element Vector{Float64}: 1.0 3.0 18.0vectors:3×3 Matrix{Float64}: 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0julia> F.values3-element Vector{Float64}: 1.0 3.0 18.0julia> F.vectors3×3 Matrix{Float64}: 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0julia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
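The sortby keyword described above can be sketched as follows; the comparison function is arbitrary and purely illustrative:

using LinearAlgebra
A = [0.0 1.0; -1.0 0.0]                 # eigenvalues -im and +im
eigvals(A)                              # default order: sorted by (real(λ), imag(λ))
F = eigen(A; sortby = λ -> -imag(λ))    # a custom ordering
F.values                                # now +im comes first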
eigen(A, B; sortby) -> GeneralizedEigen
Compute the generalized eigenvalue decomposition of A and B, returning a GeneralizedEigen factorization object F which contains the generalized eigenvalues in F.values and the generalized eigenvectors in the columns of the matrix F.vectors. This corresponds to solving a generalized eigenvalue problem of the form Ax = λBx, where A, B are matrices, x is an eigenvector, and λ is an eigenvalue. (The kth generalized eigenvector can be obtained from the slice F.vectors[:, k].)
Iterating the decomposition produces the components F.values and F.vectors.
By default, the eigenvalues and vectors are sorted lexicographically by (real(λ), imag(λ)). A different comparison function by(λ) can be passed to sortby, or you can pass sortby=nothing to leave the eigenvalues in an arbitrary order.
Examples
julia> A = [1 0; 0 -1]2×2 Matrix{Int64}: 1 0 0 -1julia> B = [0 1; 1 0]2×2 Matrix{Int64}: 0 1 1 0julia> F = eigen(A, B);julia> F.values2-element Vector{ComplexF64}: 0.0 - 1.0im 0.0 + 1.0imjulia> F.vectors2×2 Matrix{ComplexF64}: 0.0+1.0im 0.0-1.0im -1.0+0.0im -1.0-0.0imjulia> vals, vecs = F; # destructuring via iterationjulia> vals == F.values && vecs == F.vectorstrue
eigen(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> Eigen
Compute the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
Iterating the decomposition produces the components F.values and F.vectors.
The following functions are available for Eigen objects: inv, det, and isposdef.
The UnitRange irange specifies indices of the sorted eigenvalues to search for.
If irange is not 1:n, where n is the dimension of A, then the returned factorization will be a truncated factorization.
eigen(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> Eigen
Compute the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
Iterating the decomposition produces the components F.values and F.vectors.
The following functions are available for Eigen objects: inv, det, and isposdef.
vl is the lower bound of the window of eigenvalues to search for, and vu is the upper bound.
If [vl, vu] does not contain all eigenvalues of A, then the returned factorization will be a truncated factorization.
LinearAlgebra.eigen! — Function
eigen!(A; permute, scale, sortby)
eigen!(A, B; sortby)
Same as eigen, but saves space by overwriting the input A (and B), instead of creating a copy.
LinearAlgebra.Hessenberg — Type
Hessenberg <: Factorization
A Hessenberg object represents the Hessenberg factorization QHQ' of a square matrix, or a shift Q(H+μI)Q' thereof, which is produced by the hessenberg function.
LinearAlgebra.hessenberg — Function
hessenberg(A) -> Hessenberg
Compute the Hessenberg decomposition of A and return a Hessenberg object. If F is the factorization object, the unitary matrix can be accessed with F.Q (of type LinearAlgebra.HessenbergQ) and the Hessenberg matrix with F.H (of type UpperHessenberg), either of which may be converted to a regular matrix with Matrix(F.H) or Matrix(F.Q).
If A is Hermitian or real-Symmetric, then the Hessenberg decomposition produces a real-symmetric tridiagonal matrix and F.H is of type SymTridiagonal.
Note that the shifted factorization A+μI = Q (H+μI) Q' can be constructed efficiently by F + μ*I using the UniformScaling object I, which creates a new Hessenberg object with shared storage and a modified shift. The shift of a given F is obtained by F.μ. This is useful because multiple shifted solves (F + μ*I) \ b (for different μ and/or b) can be performed efficiently once F is created.
Iterating the decomposition produces the factors F.Q, F.H, F.μ.
Examples
julia> A = [4. 9. 7.; 4. 4. 1.; 4. 3. 2.]3×3 Matrix{Float64}: 4.0 9.0 7.0 4.0 4.0 1.0 4.0 3.0 2.0julia> F = hessenberg(A)Hessenberg{Float64, UpperHessenberg{Float64, Matrix{Float64}}, Matrix{Float64}, Vector{Float64}, Bool}Q factor: 3×3 LinearAlgebra.HessenbergQ{Float64, Matrix{Float64}, Vector{Float64}, false}H factor:3×3 UpperHessenberg{Float64, Matrix{Float64}}: 4.0 -11.3137 -1.41421 -5.65685 5.0 2.0 ⋅ -8.88178e-16 1.0julia> F.Q * F.H * F.Q'3×3 Matrix{Float64}: 4.0 9.0 7.0 4.0 4.0 1.0 4.0 3.0 2.0julia> q, h = F; # destructuring via iterationjulia> q == F.Q && h == F.Htrue
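The shifted-solve idiom described above might look like this sketch (the shifts are illustrative):

using LinearAlgebra
A = [4.0 9.0 7.0; 4.0 4.0 1.0; 4.0 3.0 2.0]
b = [1.0, 2.0, 3.0]
F = hessenberg(A)          # factor once
for μ in (0.5, 1.0, 2.0)
    x = (F + μ*I) \ b      # each shifted solve reuses the same factorization
    @assert (A + μ*I) * x ≈ b
end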
LinearAlgebra.hessenberg! — Function
hessenberg!(A) -> Hessenberg
hessenberg! is the same as hessenberg, but saves space by overwriting the input A, instead of creating a copy.
LinearAlgebra.Schur — Type
Schur <: Factorization
Matrix factorization type of the Schur factorization of a matrix A. This is the return type of schur(_), the corresponding matrix factorization function.
If F::Schur is the factorization object, the (quasi) triangular Schur factor can be obtained via either F.Schur or F.T and the orthogonal/unitary Schur vectors via F.vectors or F.Z such that A = F.vectors * F.Schur * F.vectors'. The eigenvalues of A can be obtained with F.values.
Iterating the decomposition produces the components F.T, F.Z, and F.values.
Examples
julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> F = schur(A)Schur{Float64, Matrix{Float64}, Vector{Float64}}T factor:2×2 Matrix{Float64}: 3.0 9.0 0.0 -2.0Z factor:2×2 Matrix{Float64}: 0.961524 0.274721 -0.274721 0.961524eigenvalues:2-element Vector{Float64}: 3.0 -2.0julia> F.vectors * F.Schur * F.vectors'2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> t, z, vals = F; # destructuring via iterationjulia> t == F.T && z == F.Z && vals == F.valuestrue
LinearAlgebra.GeneralizedSchur — Type
GeneralizedSchur <: Factorization
Matrix factorization type of the generalized Schur factorization of two matrices A and B. This is the return type of schur(_, _), the corresponding matrix factorization function.
If F::GeneralizedSchur is the factorization object, the (quasi) triangular Schur factors can be obtained via F.S and F.T, the left unitary/orthogonal Schur vectors via F.left or F.Q, and the right unitary/orthogonal Schur vectors via F.right or F.Z, such that A=F.left*F.S*F.right' and B=F.left*F.T*F.right'. The generalized eigenvalues of A and B can be obtained with F.α./F.β.
Iterating the decomposition produces the components F.S, F.T, F.Q, F.Z, F.α, and F.β.
LinearAlgebra.schur — Function
schur(A) -> F::Schur
Computes the Schur factorization of the matrix A. The (quasi) triangular Schur factor can be obtained from the Schur object F with either F.Schur or F.T and the orthogonal/unitary Schur vectors can be obtained with F.vectors or F.Z such that A = F.vectors * F.Schur * F.vectors'. The eigenvalues of A can be obtained with F.values.
For real A, the Schur factorization is "quasitriangular", which means that it is upper-triangular except with 2×2 diagonal blocks for any conjugate pair of complex eigenvalues; this allows the factorization to be purely real even when there are complex eigenvalues. To obtain the (complex) purely upper-triangular Schur factorization from a real quasitriangular factorization, you can use Schur{Complex}(schur(A)).
Iterating the decomposition produces the components F.T, F.Z, and F.values.
Examples
julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> F = schur(A)Schur{Float64, Matrix{Float64}, Vector{Float64}}T factor:2×2 Matrix{Float64}: 3.0 9.0 0.0 -2.0Z factor:2×2 Matrix{Float64}: 0.961524 0.274721 -0.274721 0.961524eigenvalues:2-element Vector{Float64}: 3.0 -2.0julia> F.vectors * F.Schur * F.vectors'2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> t, z, vals = F; # destructuring via iterationjulia> t == F.T && z == F.Z && vals == F.valuestrue
schur(A, B) -> F::GeneralizedSchur
Computes the Generalized Schur (or QZ) factorization of the matrices A and B. The (quasi) triangular Schur factors can be obtained from the Schur object F with F.S and F.T, the left unitary/orthogonal Schur vectors can be obtained with F.left or F.Q, and the right unitary/orthogonal Schur vectors can be obtained with F.right or F.Z, such that A=F.left*F.S*F.right' and B=F.left*F.T*F.right'. The generalized eigenvalues of A and B can be obtained with F.α./F.β.
Iterating the decomposition produces the components F.S, F.T, F.Q, F.Z, F.α, and F.β.
LinearAlgebra.schur! — Function
schur!(A) -> F::Schur
Same as schur but uses the input argument A as workspace.
Examples
julia> A = [5. 7.; -2. -4.]2×2 Matrix{Float64}: 5.0 7.0 -2.0 -4.0julia> F = schur!(A)Schur{Float64, Matrix{Float64}, Vector{Float64}}T factor:2×2 Matrix{Float64}: 3.0 9.0 0.0 -2.0Z factor:2×2 Matrix{Float64}: 0.961524 0.274721 -0.274721 0.961524eigenvalues:2-element Vector{Float64}: 3.0 -2.0julia> A2×2 Matrix{Float64}: 3.0 9.0 0.0 -2.0
schur!(A::StridedMatrix, B::StridedMatrix) -> F::GeneralizedSchur
Same as schur but uses the input matrices A and B as workspace.
LinearAlgebra.ordschur — Function
ordschur(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur
Reorders the Schur factorization F of a matrix A = Z*T*Z' according to the logical array select, returning the reordered factorization F object. The selected eigenvalues appear in the leading diagonal of F.Schur and the corresponding leading columns of F.vectors form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via select.
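For example, a sketch that reorders the eigenvalues of the 2×2 matrix used elsewhere in this section:

using LinearAlgebra
A = [5.0 7.0; -2.0 -4.0]
F = schur(A)
F.values                        # [3.0, -2.0]
G = ordschur(F, [false, true])  # select the second eigenvalue to lead
G.values                        # [-2.0, 3.0]
A ≈ G.vectors * G.Schur * G.vectors'   # true: the factorization identity still holds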
ordschur(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur
Reorders the Generalized Schur factorization F of a matrix pair (A, B) = (Q*S*Z', Q*T*Z') according to the logical array select and returns a GeneralizedSchur object F. The selected eigenvalues appear in the leading diagonal of both F.S and F.T, and the left and right orthogonal/unitary Schur vectors are also reordered such that (A, B) = F.Q*(F.S, F.T)*F.Z' still holds and the generalized eigenvalues of A and B can still be obtained with F.α./F.β.
LinearAlgebra.ordschur! — Function
ordschur!(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur
Same as ordschur but overwrites the factorization F.
ordschur!(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur
Same as ordschur but overwrites the factorization F.
LinearAlgebra.SVD — Type
SVD <: Factorization
Matrix factorization type of the singular value decomposition (SVD) of a matrix A. This is the return type of svd(_), the corresponding matrix factorization function.
If F::SVD is the factorization object, U, S, V and Vt can be obtained via F.U, F.S, F.V and F.Vt, such that A = U * Diagonal(S) * Vt. The singular values in S are sorted in descending order.
Iterating the decomposition produces the components U, S, and V.
Examples
julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]4×5 Matrix{Float64}: 1.0 0.0 0.0 0.0 2.0 0.0 0.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0julia> F = svd(A)SVD{Float64, Float64, Matrix{Float64}, Vector{Float64}}U factor:4×4 Matrix{Float64}: 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 -1.0 0.0singular values:4-element Vector{Float64}: 3.0 2.23606797749979 2.0 0.0Vt factor:4×5 Matrix{Float64}: -0.0 0.0 1.0 -0.0 0.0 0.447214 0.0 0.0 0.0 0.894427 0.0 -1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0julia> F.U * Diagonal(F.S) * F.Vt4×5 Matrix{Float64}: 1.0 0.0 0.0 0.0 2.0 0.0 0.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0julia> u, s, v = F; # destructuring via iterationjulia> u == F.U && s == F.S && v == F.Vtrue
LinearAlgebra.GeneralizedSVD — Type
GeneralizedSVD <: Factorization
Matrix factorization type of the generalized singular value decomposition (SVD) of two matrices A and B, such that A = F.U*F.D1*F.R0*F.Q' and B = F.V*F.D2*F.R0*F.Q'. This is the return type of svd(_, _), the corresponding matrix factorization function.
For an M-by-N matrix A and P-by-N matrix B,
- U is an M-by-M orthogonal matrix,
- V is a P-by-P orthogonal matrix,
- Q is an N-by-N orthogonal matrix,
- D1 is an M-by-(K+L) diagonal matrix with 1s in the first K entries,
- D2 is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
- R0 is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,
- K+L is the effective numerical rank of the matrix [A; B].
Iterating the decomposition produces the components U, V, Q, D1, D2, and R0.
The entries of F.D1 and F.D2 are related, as explained in the LAPACK documentation for the generalized SVD and the xGGSVD3 routine which is called underneath (in LAPACK 3.6.0 and newer).
Examples
julia> A = [1. 0.; 0. -1.]2×2 Matrix{Float64}: 1.0 0.0 0.0 -1.0julia> B = [0. 1.; 1. 0.]2×2 Matrix{Float64}: 0.0 1.0 1.0 0.0julia> F = svd(A, B)GeneralizedSVD{Float64, Matrix{Float64}, Float64, Vector{Float64}}U factor:2×2 Matrix{Float64}: 1.0 0.0 0.0 1.0V factor:2×2 Matrix{Float64}: -0.0 -1.0 1.0 0.0Q factor:2×2 Matrix{Float64}: 1.0 0.0 0.0 1.0D1 factor:2×2 Matrix{Float64}: 0.707107 0.0 0.0 0.707107D2 factor:2×2 Matrix{Float64}: 0.707107 0.0 0.0 0.707107R0 factor:2×2 Matrix{Float64}: 1.41421 0.0 0.0 -1.41421julia> F.U*F.D1*F.R0*F.Q'2×2 Matrix{Float64}: 1.0 0.0 0.0 -1.0julia> F.V*F.D2*F.R0*F.Q'2×2 Matrix{Float64}: -0.0 1.0 1.0 0.0
LinearAlgebra.svd — Function
svd(A; full::Bool = false, alg::Algorithm = default_svd_alg(A)) -> SVD
Compute the singular value decomposition (SVD) of A and return an SVD object.
U, S, V and Vt can be obtained from the factorization F with F.U, F.S, F.V and F.Vt, such that A = U * Diagonal(S) * Vt. The algorithm produces Vt and hence Vt is more efficient to extract than V. The singular values in S are sorted in descending order.
Iterating the decomposition produces the components U, S, and V.
If full = false (default), a "thin" SVD is returned. For an $M \times N$ matrix A, in the full factorization U is $M \times M$ and V is $N \times N$, while in the thin factorization U is $M \times K$ and V is $N \times K$, where $K = \min(M,N)$ is the number of singular values.
If alg = DivideAndConquer() a divide-and-conquer algorithm is used to calculate the SVD. Another (typically slower but more accurate) option is alg = QRIteration().
The alg keyword argument requires Julia 1.3 or later.
Examples
julia> A = rand(4,3);julia> F = svd(A); # Store the Factorization Objectjulia> A ≈ F.U * Diagonal(F.S) * F.Vttruejulia> U, S, V = F; # destructuring via iterationjulia> A ≈ U * Diagonal(S) * V'truejulia> Uonly, = svd(A); # Store U onlyjulia> Uonly == Utrue
svd(A, B) -> GeneralizedSVD
Compute the generalized SVD of A and B, returning a GeneralizedSVD factorization object F such that [A;B] = [F.U * F.D1; F.V * F.D2] * F.R0 * F.Q', where
- U is an M-by-M orthogonal matrix,
- V is a P-by-P orthogonal matrix,
- Q is an N-by-N orthogonal matrix,
- D1 is an M-by-(K+L) diagonal matrix with 1s in the first K entries,
- D2 is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
- R0 is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,
- K+L is the effective numerical rank of the matrix [A; B].
Iterating the decomposition produces the components U, V, Q, D1, D2, and R0.
The generalized SVD is used in applications such as when one wants to compare how much belongs to A vs. how much belongs to B, as in human vs yeast genome, or signal vs noise, or between clusters vs within clusters. (See Edelman and Wang for discussion: https://arxiv.org/abs/1901.00485)
It decomposes [A; B] into [UC; VS]H, where [UC; VS] is a natural orthogonal basis for the column space of [A; B], and H = RQ' is a natural non-orthogonal basis for the rowspace of [A; B], where the top rows are most closely attributed to the A matrix, and the bottom to the B matrix. The multi-cosine/sine matrices C and S provide a multi-measure of how much A vs how much B, and U and V provide directions in which these are measured.
Examples
julia> A = randn(3,2); B=randn(4,2);julia> F = svd(A, B);julia> U,V,Q,C,S,R = F;julia> H = R*Q';julia> [A; B] ≈ [U*C; V*S]*Htruejulia> [A; B] ≈ [F.U*F.D1; F.V*F.D2]*F.R0*F.Q'truejulia> Uonly, = svd(A,B);julia> U == Uonlytrue
LinearAlgebra.svd! — Function
LinearAlgebra.svdvals — Function
svdvals(A)
Return the singular values of A in descending order.
Examples
julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]4×5 Matrix{Float64}: 1.0 0.0 0.0 0.0 2.0 0.0 0.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0julia> svdvals(A)4-element Vector{Float64}: 3.0 2.23606797749979 2.0 0.0
svdvals(A, B)
Return the generalized singular values from the generalized singular value decomposition of A and B. See also svd.
Examples
julia> A = [1. 0.; 0. -1.]2×2 Matrix{Float64}: 1.0 0.0 0.0 -1.0julia> B = [0. 1.; 1. 0.]2×2 Matrix{Float64}: 0.0 1.0 1.0 0.0julia> svdvals(A, B)2-element Vector{Float64}: 1.0 1.0
LinearAlgebra.svdvals! — Function
LinearAlgebra.Givens — Type
LinearAlgebra.Givens(i1,i2,c,s) -> G
A Givens rotation linear operator. The fields c and s represent the cosine and sine of the rotation angle, respectively. The Givens type supports left multiplication G*A and conjugated transpose right multiplication A*G'. The type doesn't have a size and can therefore be multiplied with matrices of arbitrary size as long as i2 <= size(A,2) for G*A or i2 <= size(A,1) for A*G'.
See also givens.
LinearAlgebra.givens — Function
givens(f::T, g::T, i1::Integer, i2::Integer) where {T} -> (G::Givens, r::T)
Computes the Givens rotation G and scalar r such that for any vector x where
x[i1] = f
x[i2] = g
the result of the multiplication
y = G*x
has the property that
y[i1] = r
y[i2] = 0
See also LinearAlgebra.Givens.
givens(A::AbstractArray, i1::Integer, i2::Integer, j::Integer) -> (G::Givens, r)
Computes the Givens rotation G and scalar r such that the result of the multiplication
B = G*A
has the property that
B[i1,j] = r
B[i2,j] = 0
See also LinearAlgebra.Givens.
givens(x::AbstractVector, i1::Integer, i2::Integer) -> (G::Givens, r)
Computes the Givens rotation G and scalar r such that the result of the multiplication
B = G*x
has the property that
B[i1] = r
B[i2] = 0
See also LinearAlgebra.Givens.
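A small sketch of zeroing one entry of a vector (the values are illustrative):

using LinearAlgebra
x = [3.0, 4.0]
G, r = givens(x[1], x[2], 1, 2)
y = G * x
y ≈ [5.0, 0.0]   # true: the rotation moves all the weight to the first entry
r ≈ norm(x)      # true: r carries the combined magnitude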
LinearAlgebra.triu — Function
triu(M)
Upper triangle of a matrix.
Examples
julia> a = fill(1.0, (4,4))4×4 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0julia> triu(a)4×4 Matrix{Float64}: 1.0 1.0 1.0 1.0 0.0 1.0 1.0 1.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 1.0
triu(M, k::Integer)
Return the upper triangle of M starting from the kth superdiagonal.
Examples
julia> a = fill(1.0, (4,4))4×4 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0julia> triu(a,3)4×4 Matrix{Float64}: 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0julia> triu(a,-3)4×4 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
LinearAlgebra.triu! — Function
triu!(M)
Upper triangle of a matrix, overwriting M in the process. See also triu.
triu!(M, k::Integer)
Return the upper triangle of M starting from the kth superdiagonal, overwriting M in the process.
Examples
julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]5×5 Matrix{Int64}: 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5julia> triu!(M, 1)5×5 Matrix{Int64}: 0 2 3 4 5 0 0 3 4 5 0 0 0 4 5 0 0 0 0 5 0 0 0 0 0
LinearAlgebra.tril — Function
tril(M)
Lower triangle of a matrix.
Examples
julia> a = fill(1.0, (4,4))4×4 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0julia> tril(a)4×4 Matrix{Float64}: 1.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 1.0 1.0 1.0 0.0 1.0 1.0 1.0 1.0
tril(M, k::Integer)
Return the lower triangle of M starting from the kth superdiagonal.
Examples
julia> a = fill(1.0, (4,4))4×4 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0julia> tril(a,3)4×4 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0julia> tril(a,-3)4×4 Matrix{Float64}: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
LinearAlgebra.tril! — Function
tril!(M)
Lower triangle of a matrix, overwriting M in the process. See also tril.
tril!(M, k::Integer)
Return the lower triangle of M starting from the kth superdiagonal, overwriting M in the process.
Examples
julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]5×5 Matrix{Int64}: 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5julia> tril!(M, 2)5×5 Matrix{Int64}: 1 2 3 0 0 1 2 3 4 0 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5
LinearAlgebra.diagind — Function
diagind(M::AbstractMatrix, k::Integer = 0, indstyle::IndexStyle = IndexLinear())
diagind(M::AbstractMatrix, indstyle::IndexStyle = IndexLinear())
An AbstractRange giving the indices of the kth diagonal of the matrix M. Optionally, an index style may be specified which determines the type of the range returned. If indstyle isa IndexLinear (default), this returns an AbstractRange{Integer}. On the other hand, if indstyle isa IndexCartesian, this returns an AbstractRange{CartesianIndex{2}}.
If k is not provided, it is assumed to be 0 (corresponding to the main diagonal).
Examples
julia> A = [1 2 3; 4 5 6; 7 8 9]3×3 Matrix{Int64}: 1 2 3 4 5 6 7 8 9julia> diagind(A, -1)2:4:6julia> diagind(A, IndexCartesian())StepRangeLen(CartesianIndex(1, 1), CartesianIndex(1, 1), 3)
Specifying an IndexStyle requires at least Julia 1.11.
LinearAlgebra.diag — Function
LinearAlgebra.diagm — Function
diagm(kv::Pair{<:Integer,<:AbstractVector}...)
diagm(m::Integer, n::Integer, kv::Pair{<:Integer,<:AbstractVector}...)
Construct a matrix from Pairs of diagonals and vectors. Vector kv.second will be placed on the kv.first diagonal. By default the matrix is square and its size is inferred from kv, but a non-square size m×n (padded with zeros as needed) can be specified by passing m,n as the first arguments. For repeated diagonal indices kv.first the values in the corresponding vectors kv.second will be added.
diagm constructs a full matrix; if you want storage-efficient versions with fast arithmetic, see Diagonal, Bidiagonal, Tridiagonal and SymTridiagonal.
Examples
julia> diagm(1 => [1,2,3])4×4 Matrix{Int64}: 0 1 0 0 0 0 2 0 0 0 0 3 0 0 0 0julia> diagm(1 => [1,2,3], -1 => [4,5])4×4 Matrix{Int64}: 0 1 0 0 4 0 2 0 0 5 0 3 0 0 0 0julia> diagm(1 => [1,2,3], 1 => [1,2,3])4×4 Matrix{Int64}: 0 2 0 0 0 0 4 0 0 0 0 6 0 0 0 0
diagm(v::AbstractVector)
diagm(m::Integer, n::Integer, v::AbstractVector)
Construct a matrix with elements of the vector as diagonal elements. By default, the matrix is square and its size is given by length(v), but a non-square size m×n can be specified by passing m,n as the first arguments.
Examples
julia> diagm([1,2,3])3×3 Matrix{Int64}: 1 0 0 0 2 0 0 0 3
LinearAlgebra.rank — Function
rank(::QRSparse{Tv,Ti}) -> Ti
Return the rank of the QR factorization.
rank(S::SparseMatrixCSC{Tv,Ti}; [tol::Real]) -> Ti
Calculate the rank of S by calculating its QR factorization. Values smaller than tol are considered as zero. See SPQR's manual.
rank(A::AbstractMatrix; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
rank(A::AbstractMatrix, rtol::Real)
Compute the numerical rank of a matrix by counting how many outputs of svdvals(A) are greater than max(atol, rtol*σ₁) where σ₁ is A's largest calculated singular value. atol and rtol are the absolute and relative tolerances, respectively. The default relative tolerance is n*ϵ, where n is the size of the smallest dimension of A, and ϵ is the eps of the element type of A.
Numerical rank can be a sensitive and imprecise characterization of ill-conditioned matrices with singular values that are close to the threshold tolerance max(atol, rtol*σ₁). In such cases, slight perturbations to the singular-value computation or to the matrix can change the result of rank by pushing one or more singular values across the threshold. These variations can even occur due to changes in floating-point errors between different Julia versions, architectures, compilers, or operating systems.
The atol and rtol keyword arguments require at least Julia 1.1. In Julia 1.0 rtol is available as a positional argument, but this will be deprecated in Julia 2.0.
Examples
julia> rank(Matrix(I, 3, 3))3julia> rank(diagm(0 => [1, 0, 2]))2julia> rank(diagm(0 => [1, 0.001, 2]), rtol=0.1)2julia> rank(diagm(0 => [1, 0.001, 2]), rtol=0.00001)3julia> rank(diagm(0 => [1, 0.001, 2]), atol=1.5)1
LinearAlgebra.norm — Function
norm(A, p::Real=2)
For any iterable container A (including arrays of any dimension) of numbers (or any element type for which norm is defined), compute the p-norm (defaulting to p=2) as if A were a vector of the corresponding length.
The p-norm is defined as
\[\|A\|_p = \left( \sum_{i=1}^n | a_i | ^p \right)^{1/p}\]
with $a_i$ the entries of $A$, $| a_i |$ the norm of $a_i$, and $n$ the length of $A$. Since the p-norm is computed using the norms of the entries of A, the p-norm of a vector of vectors is not compatible with the interpretation of it as a block vector in general if p != 2.
p can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, norm(A, Inf) returns the largest value in abs.(A), whereas norm(A, -Inf) returns the smallest. If A is a matrix and p=2, then this is equivalent to the Frobenius norm.
The second argument p is not necessarily a part of the interface for norm, i.e. a custom type may only implement norm(A) without a second argument.
Use opnorm to compute the operator norm of a matrix.
Examples
julia> v = [3, -2, 6]3-element Vector{Int64}: 3 -2 6julia> norm(v)7.0julia> norm(v, 1)11.0julia> norm(v, Inf)6.0julia> norm([1 2 3; 4 5 6; 7 8 9])16.881943016134134julia> norm([1 2 3 4 5 6 7 8 9])16.881943016134134julia> norm(1:9)16.881943016134134julia> norm(hcat(v,v), 1) == norm(vcat(v,v), 1) != norm([v,v], 1)truejulia> norm(hcat(v,v), 2) == norm(vcat(v,v), 2) == norm([v,v], 2)truejulia> norm(hcat(v,v), Inf) == norm(vcat(v,v), Inf) != norm([v,v], Inf)true
norm(x::Number, p::Real=2)
For numbers, return$\left( |x|^p \right)^{1/p}$.
Examples
julia> norm(2, 1)2.0julia> norm(-2, 1)2.0julia> norm(2, 2)2.0julia> norm(-2, 2)2.0julia> norm(2, Inf)2.0julia> norm(-2, Inf)2.0
LinearAlgebra.opnorm — Function
opnorm(A::AbstractMatrix, p::Real=2)
Compute the operator norm (or matrix norm) induced by the vector p-norm, where valid values of p are 1, 2, or Inf. (Note that for sparse matrices, p=2 is currently not implemented.) Use norm to compute the Frobenius norm.
When p=1, the operator norm is the maximum absolute column sum of A:
\[\|A\|_1 = \max_{1 ≤ j ≤ n} \sum_{i=1}^m | a_{ij} |\]
with $a_{ij}$ the entries of $A$, and $m$ and $n$ its dimensions.
When p=2, the operator norm is the spectral norm, equal to the largest singular value of A.
When p=Inf, the operator norm is the maximum absolute row sum of A:
\[\|A\|_\infty = \max_{1 ≤ i ≤ m} \sum_{j=1}^n | a_{ij} |\]
Examples
julia> A = [1 -2 -3; 2 3 -1]2×3 Matrix{Int64}: 1 -2 -3 2 3 -1julia> opnorm(A, Inf)6.0julia> opnorm(A, 1)5.0
opnorm(x::Number, p::Real=2)
For numbers, return $\left( |x|^p \right)^{1/p}$. This is equivalent to norm.
opnorm(A::Adjoint{<:Any,<:AbstractVector}, q::Real=2)
opnorm(A::Transpose{<:Any,<:AbstractVector}, q::Real=2)
For Adjoint/Transpose-wrapped vectors, return the operator $q$-norm of A, which is equivalent to the p-norm with value p = q/(q-1). They coincide at p = q = 2. Use norm to compute the p-norm of A as a vector.
The difference in norm between a vector space and its dual arises to preserve the relationship between duality and the dot product, and the result is consistent with the operator p-norm of a 1 × n matrix.
Examples
julia> v = [1; im];julia> vc = v';julia> opnorm(vc, 1)1.0julia> norm(vc, 1)2.0julia> norm(v, 1)2.0julia> opnorm(vc, 2)1.4142135623730951julia> norm(vc, 2)1.4142135623730951julia> norm(v, 2)1.4142135623730951julia> opnorm(vc, Inf)2.0julia> norm(vc, Inf)1.0julia> norm(v, Inf)1.0
LinearAlgebra.normalize! — Function
LinearAlgebra.normalize — Function
normalize(a, p::Real=2)
Normalize a so that its p-norm equals unity, i.e. norm(a, p) == 1. For scalars, this is similar to sign(a), except normalize(0) = NaN. See also normalize!, norm, and sign.
Examples
julia> a = [1,2,4];julia> b = normalize(a)3-element Vector{Float64}: 0.2182178902359924 0.4364357804719848 0.8728715609439696julia> norm(b)1.0julia> c = normalize(a, 1)3-element Vector{Float64}: 0.14285714285714285 0.2857142857142857 0.5714285714285714julia> norm(c, 1)1.0julia> a = [1 2 4 ; 1 2 4]2×3 Matrix{Int64}: 1 2 4 1 2 4julia> norm(a)6.48074069840786julia> normalize(a)2×3 Matrix{Float64}: 0.154303 0.308607 0.617213 0.154303 0.308607 0.617213julia> normalize(3, 1)1.0julia> normalize(-8, 1)-1.0julia> normalize(0, 1)NaN
LinearAlgebra.cond — Function
cond(M, p::Real=2)
Condition number of the matrix M, computed using the operator p-norm. Valid values for p are 1, 2 (default), or Inf.
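For instance, for an illustrative diagonal matrix the condition number is the ratio of the extreme singular values:

using LinearAlgebra
M = [1.0 0.0; 0.0 1.0e-6]
cond(M)      # 1.0e6: σ₁/σₙ for the default p = 2
cond(M, 1)   # also 1.0e6 for this diagonal matrix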
LinearAlgebra.condskeel — Function
condskeel(M, [x, p::Real=Inf])
\[\kappa_S(M, p) = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \right\Vert_p \\
\kappa_S(M, x, p) = \frac{\left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \left\vert x \right\vert \right\Vert_p}{\left \Vert x \right \Vert_p}\]
Skeel condition number $\kappa_S$ of the matrix M, optionally with respect to the vector x, as computed using the operator p-norm. $\left\vert M \right\vert$ denotes the matrix of (entrywise) absolute values of $M$; $\left\vert M \right\vert_{ij} = \left\vert M_{ij} \right\vert$. Valid values for p are 1, 2 and Inf (default).
This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.
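For contrast with cond, a nonsingular diagonal matrix is perfectly conditioned in the Skeel sense, since |M| |M⁻¹| = I (a sketch reusing the matrix above):

using LinearAlgebra
M = [1.0 0.0; 0.0 1.0e-6]
cond(M)        # 1.0e6
condskeel(M)   # 1.0: |M|*|M⁻¹| == I for any nonsingular diagonal matrix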
LinearAlgebra.tr — Function
tr(M)
Matrix trace. Sums the diagonal elements of M.
Examples
julia> A = [1 2; 3 4]2×2 Matrix{Int64}: 1 2 3 4julia> tr(A)5
LinearAlgebra.det — Function
LinearAlgebra.logdet — Function
logdet(M)
Logarithm of matrix determinant. Equivalent to log(det(M)), but may provide increased accuracy and avoids overflow/underflow.
Examples
julia> M = [1 0; 2 2]2×2 Matrix{Int64}: 1 0 2 2julia> logdet(M)0.6931471805599453julia> logdet(Matrix(I, 3, 3))0.0
LinearAlgebra.logabsdet — Function
logabsdet(M)
Log of absolute value of matrix determinant. Equivalent to (log(abs(det(M))), sign(det(M))), but may provide increased accuracy and/or speed.
Examples
julia> A = [-1. 0.; 0. 1.]2×2 Matrix{Float64}: -1.0 0.0 0.0 1.0julia> det(A)-1.0julia> logabsdet(A)(0.0, -1.0)julia> B = [2. 0.; 0. 1.]2×2 Matrix{Float64}: 2.0 0.0 0.0 1.0julia> det(B)2.0julia> logabsdet(B)(0.6931471805599453, 1.0)
Base.inv — Method
inv(M)
Matrix inverse. Computes matrix N such that M * N = I, where I is the identity matrix. Computed by solving the left-division N = M \ I.
Examples
julia> M = [2 5; 1 3]2×2 Matrix{Int64}: 2 5 1 3julia> N = inv(M)2×2 Matrix{Float64}: 3.0 -5.0 -1.0 2.0julia> M*N == N*M == Matrix(I, 2, 2)true
LinearAlgebra.pinv — Function
pinv(M; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
pinv(M, rtol::Real) = pinv(M; rtol=rtol) # to be deprecated in Julia 2.0
Computes the Moore-Penrose pseudoinverse.
For matrices M with floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values greater than max(atol, rtol*σ₁) where σ₁ is the largest singular value of M.
The optimal choice of absolute (atol) and relative tolerance (rtol) varies both with the value of M and the intended application of the pseudoinverse. The default relative tolerance is n*ϵ, where n is the size of the smallest dimension of M, and ϵ is the eps of the element type of M.
For inverting dense ill-conditioned matrices in a least-squares sense, rtol = sqrt(eps(real(float(oneunit(eltype(M)))))) is recommended.
For more information, see [issue8859], [B96], [S84], [KY88].
Examples
julia> M = [1.5 1.3; 1.2 1.9]2×2 Matrix{Float64}: 1.5 1.3 1.2 1.9julia> N = pinv(M)2×2 Matrix{Float64}: 1.47287 -1.00775 -0.930233 1.16279julia> M * N2×2 Matrix{Float64}: 1.0 -2.22045e-16 4.44089e-16 1.0
LinearAlgebra.nullspace — Function
nullspace(M; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
nullspace(M, rtol::Real) = nullspace(M; rtol=rtol) # to be deprecated in Julia 2.0
Computes a basis for the nullspace of `M` by including the singular vectors of `M` whose singular values have magnitudes smaller than `max(atol, rtol*σ₁)`, where `σ₁` is `M`'s largest singular value.
By default, the relative tolerance `rtol` is `n*ϵ`, where `n` is the size of the smallest dimension of `M`, and `ϵ` is the `eps` of the element type of `M`.
Examples
julia> M = [1 0 0; 0 1 0; 0 0 0]3×3 Matrix{Int64}: 1 0 0 0 1 0 0 0 0julia> nullspace(M)3×1 Matrix{Float64}: 0.0 0.0 1.0julia> nullspace(M, rtol=3)3×3 Matrix{Float64}: 0.0 1.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0julia> nullspace(M, atol=0.95)3×1 Matrix{Float64}: 0.0 0.0 1.0
Base.kron — Function
kron(A, B)
Computes the Kronecker product of two vectors, matrices or numbers.
For real vectors `v` and `w`, the Kronecker product is related to the outer product by `kron(v,w) == vec(w * transpose(v))` or `w * transpose(v) == reshape(kron(v,w), (length(w), length(v)))`. Note how the ordering of `v` and `w` differs on the left and right of these expressions (due to column-major storage). For complex vectors, the outer product `w * v'` also differs by conjugation of `v`.
Examples
julia> A = [1 2; 3 4]2×2 Matrix{Int64}: 1 2 3 4julia> B = [im 1; 1 -im]2×2 Matrix{Complex{Int64}}: 0+1im 1+0im 1+0im 0-1imjulia> kron(A, B)4×4 Matrix{Complex{Int64}}: 0+1im 1+0im 0+2im 2+0im 1+0im 0-1im 2+0im 0-2im 0+3im 3+0im 0+4im 4+0im 3+0im 0-3im 4+0im 0-4imjulia> v = [1, 2]; w = [3, 4, 5];julia> w*transpose(v)3×2 Matrix{Int64}: 3 6 4 8 5 10julia> reshape(kron(v,w), (length(w), length(v)))3×2 Matrix{Int64}: 3 6 4 8 5 10
Base.kron! — Function
kron!(C, A, B)
Computes the Kronecker product of `A` and `B` and stores the result in `C`, overwriting the existing content of `C`. This is the in-place version of `kron`.
This function requires Julia 1.6 or later.
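A minimal usage sketch (illustrative inputs): `C` must already have size `(size(A,1)*size(B,1), size(A,2)*size(B,2))`.

```julia
julia> using LinearAlgebra

julia> A = [1 2; 3 4]; B = [0 1; 1 0];

julia> C = similar(A, 4, 4);           # preallocate the output

julia> kron!(C, A, B) == kron(A, B)    # fills C in place
true
```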
Base.exp — Method
exp(A::AbstractMatrix)
Compute the matrix exponential of `A`, defined by
\[e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}.\]
For symmetric or Hermitian `A`, an eigendecomposition (`eigen`) is used; otherwise the scaling and squaring algorithm (see [H05]) is chosen.
Examples
julia> A = Matrix(1.0I, 2, 2)2×2 Matrix{Float64}: 1.0 0.0 0.0 1.0julia> exp(A)2×2 Matrix{Float64}: 2.71828 0.0 0.0 2.71828
Base.cis — Method
cis(A::AbstractMatrix)
More efficient method for `exp(im*A)` of square matrix `A`.
Base.:^ — Method
^(A::AbstractMatrix, p::Number)
Matrix power, equivalent to $\exp(p\log(A))$.
Examples
julia> [1 2; 0 3]^32×2 Matrix{Int64}: 1 26 0 27
Base.:^ — Method
^(b::Number, A::AbstractMatrix)
Matrix exponential, equivalent to $\exp(\log(b)A)$.
Support for raising `Irrational` numbers (like `ℯ`) to a matrix was added in Julia 1.1.
Examples
julia> 2^[1 2; 0 3]2×2 Matrix{Float64}: 2.0 6.0 0.0 8.0julia> ℯ^[1 2; 0 3]2×2 Matrix{Float64}: 2.71828 17.3673 0.0 20.0855
Base.log — Method
log(A::AbstractMatrix)
If `A` has no negative real eigenvalue, compute the principal matrix logarithm of `A`, i.e. the unique matrix $X$ such that $e^X = A$ and $-\pi < Im(\lambda) < \pi$ for all the eigenvalues $\lambda$ of $X$. If `A` has nonpositive eigenvalues, a nonprincipal matrix function is returned whenever possible.
If `A` is symmetric or Hermitian, its eigendecomposition (`eigen`) is used; if `A` is triangular, an improved version of the inverse scaling and squaring method is employed (see [AH12] and [AHR13]). If `A` is real with no negative eigenvalues, then the real Schur form is computed. Otherwise, the complex Schur form is computed. Then the upper (quasi-)triangular algorithm in [AHR13] is used on the upper (quasi-)triangular factor.
Examples
julia> A = Matrix(2.7182818*I, 2, 2)2×2 Matrix{Float64}: 2.71828 0.0 0.0 2.71828julia> log(A)2×2 Matrix{Float64}: 1.0 0.0 0.0 1.0
Base.sqrt — Method
sqrt(x)
Return $\sqrt{x}$.
Throws `DomainError` for negative `Real` arguments. Use complex negative arguments instead. Note that `sqrt` has a branch cut along the negative real axis.
The prefix operator `√` is equivalent to `sqrt`.
See also: `hypot`.
Examples
julia> sqrt(big(81))9.0julia> sqrt(big(-81))ERROR: DomainError with -81.0:NaN result for non-NaN input.Stacktrace: [1] sqrt(::BigFloat) at ./mpfr.jl:501[...]julia> sqrt(big(complex(-81)))0.0 + 9.0imjulia> sqrt(-81 - 0.0im) # -0.0im is below the branch cut0.0 - 9.0imjulia> .√(1:4)4-element Vector{Float64}: 1.0 1.4142135623730951 1.7320508075688772 2.0
sqrt(A::AbstractMatrix)
If `A` has no negative real eigenvalues, compute the principal matrix square root of `A`, that is, the unique matrix $X$ with eigenvalues having positive real part such that $X^2 = A$. Otherwise, a nonprincipal square root is returned.
If `A` is real-symmetric or Hermitian, its eigendecomposition (`eigen`) is used to compute the square root. For such matrices, eigenvalues λ that appear to be slightly negative due to roundoff errors are treated as if they were zero. More precisely, matrices with all eigenvalues `≥ -rtol*(max |λ|)` are treated as semidefinite (yielding a Hermitian square root), with negative eigenvalues taken to be zero. `rtol` is a keyword argument to `sqrt` (in the Hermitian/real-symmetric case only) that defaults to machine precision scaled by `size(A,1)`.
Otherwise, the square root is determined by means of the Björck-Hammarling method [BH83], which computes the complex Schur form (`schur`) and then the complex square root of the triangular factor. If a real square root exists, an extension of this method [H87] that computes the real Schur form and then the real square root of the quasi-triangular factor is used instead.
Examples
julia> A = [4 0; 0 4]2×2 Matrix{Int64}: 4 0 0 4julia> sqrt(A)2×2 Matrix{Float64}: 2.0 0.0 0.0 2.0
Base.Math.cbrt — Method
cbrt(A::AbstractMatrix{<:Real})
Computes the real-valued cube root of a real-valued matrix `A`. If `T = cbrt(A)`, then `T*T*T ≈ A`; see the example below.
If `A` is symmetric, i.e., of type `HermOrSym{<:Real}`, then (`eigen`) is used to find the cube root. Otherwise, a specialized version of the p-th root algorithm [S03] is utilized, which exploits the real-valued Schur decomposition (`schur`) to compute the cube root.
Examples
julia> A = [0.927524 -0.15857; -1.3677 -1.01172]2×2 Matrix{Float64}: 0.927524 -0.15857 -1.3677 -1.01172julia> T = cbrt(A)2×2 Matrix{Float64}: 0.910077 -0.151019 -1.30257 -0.936818julia> T*T*T ≈ Atrue
Base.cos — Method
cos(A::AbstractMatrix)
Compute the matrix cosine of a square matrix `A`.
If `A` is symmetric or Hermitian, its eigendecomposition (`eigen`) is used to compute the cosine. Otherwise, the cosine is determined by calling `exp`.
Examples
julia> cos(fill(1.0, (2,2)))2×2 Matrix{Float64}: 0.291927 -0.708073 -0.708073 0.291927
Base.sin — Method
sin(A::AbstractMatrix)
Compute the matrix sine of a square matrix `A`. If `A` is symmetric or Hermitian, its eigendecomposition (`eigen`) is used to compute the sine. Otherwise, the sine is determined by calling `exp`.
Base.Math.sincos — Method
sincos(A::AbstractMatrix)
Compute the matrix sine and cosine of a square matrix `A`.
Examples
julia> S, C = sincos(fill(1.0, (2,2)));julia> S2×2 Matrix{Float64}: 0.454649 0.454649 0.454649 0.454649julia> C2×2 Matrix{Float64}: 0.291927 -0.708073 -0.708073 0.291927
Base.tan — Method
tan(A::AbstractMatrix)
Compute the matrix tangent of a square matrix `A`.
If `A` is symmetric or Hermitian, its eigendecomposition (`eigen`) is used to compute the tangent. Otherwise, the tangent is determined by calling `exp`.
Examples
julia> tan(fill(1.0, (2,2)))2×2 Matrix{Float64}: -1.09252 -1.09252 -1.09252 -1.09252
Base.Math.sec — Method
sec(A::AbstractMatrix)
Compute the matrix secant of a square matrix `A`.
Base.Math.csc — Method
csc(A::AbstractMatrix)
Compute the matrix cosecant of a square matrix `A`.
Base.Math.cot — Method
cot(A::AbstractMatrix)
Compute the matrix cotangent of a square matrix `A`.
Base.cosh — Method
cosh(A::AbstractMatrix)
Compute the matrix hyperbolic cosine of a square matrix `A`.
Base.sinh — Method
sinh(A::AbstractMatrix)
Compute the matrix hyperbolic sine of a square matrix `A`.
Base.tanh — Method
tanh(A::AbstractMatrix)
Compute the matrix hyperbolic tangent of a square matrix `A`.
Base.Math.sech — Method
sech(A::AbstractMatrix)
Compute the matrix hyperbolic secant of a square matrix `A`.
Base.Math.csch — Method
csch(A::AbstractMatrix)
Compute the matrix hyperbolic cosecant of a square matrix `A`.
Base.Math.coth — Method
coth(A::AbstractMatrix)
Compute the matrix hyperbolic cotangent of a square matrix `A`.
Base.acos — Method
acos(A::AbstractMatrix)
Compute the inverse matrix cosine of a square matrix `A`.
If `A` is symmetric or Hermitian, its eigendecomposition (`eigen`) is used to compute the inverse cosine. Otherwise, the inverse cosine is determined by using `log` and `sqrt`. For the theory and logarithmic formulas used to compute this function, see [AH16_1].
Examples
julia> acos(cos([0.5 0.1; -0.2 0.3]))2×2 Matrix{ComplexF64}: 0.5-8.32667e-17im 0.1+0.0im -0.2+2.63678e-16im 0.3-3.46945e-16im
Base.asin — Method
asin(A::AbstractMatrix)
Compute the inverse matrix sine of a square matrix `A`.
If `A` is symmetric or Hermitian, its eigendecomposition (`eigen`) is used to compute the inverse sine. Otherwise, the inverse sine is determined by using `log` and `sqrt`. For the theory and logarithmic formulas used to compute this function, see [AH16_2].
Examples
julia> asin(sin([0.5 0.1; -0.2 0.3]))2×2 Matrix{ComplexF64}: 0.5-4.16334e-17im 0.1-5.55112e-17im -0.2+9.71445e-17im 0.3-1.249e-16im
Base.atan — Method
atan(A::AbstractMatrix)
Compute the inverse matrix tangent of a square matrix `A`.
If `A` is symmetric or Hermitian, its eigendecomposition (`eigen`) is used to compute the inverse tangent. Otherwise, the inverse tangent is determined by using `log`. For the theory and logarithmic formulas used to compute this function, see [AH16_3].
Examples
julia> atan(tan([0.5 0.1; -0.2 0.3]))2×2 Matrix{ComplexF64}: 0.5+1.38778e-17im 0.1-2.77556e-17im -0.2+6.93889e-17im 0.3-4.16334e-17im
Base.Math.asec — Method
asec(A::AbstractMatrix)
Compute the inverse matrix secant of `A`.
Base.Math.acsc — Method
acsc(A::AbstractMatrix)
Compute the inverse matrix cosecant of `A`.
Base.Math.acot — Method
acot(A::AbstractMatrix)
Compute the inverse matrix cotangent of `A`.
Base.acosh — Method
acosh(A::AbstractMatrix)
Compute the inverse hyperbolic matrix cosine of a square matrix `A`. For the theory and logarithmic formulas used to compute this function, see [AH16_4].
Base.asinh — Method
asinh(A::AbstractMatrix)
Compute the inverse hyperbolic matrix sine of a square matrix `A`. For the theory and logarithmic formulas used to compute this function, see [AH16_5].
Base.atanh — Method
atanh(A::AbstractMatrix)
Compute the inverse hyperbolic matrix tangent of a square matrix `A`. For the theory and logarithmic formulas used to compute this function, see [AH16_6].
Base.Math.asech — Method
asech(A::AbstractMatrix)
Compute the inverse matrix hyperbolic secant of `A`.
Base.Math.acsch — Method
acsch(A::AbstractMatrix)
Compute the inverse matrix hyperbolic cosecant of `A`.
Base.Math.acoth — Method
acoth(A::AbstractMatrix)
Compute the inverse matrix hyperbolic cotangent of `A`.
LinearAlgebra.lyap — Function
lyap(A, C)
Computes the solution `X` to the continuous Lyapunov equation `AX + XA' + C = 0`, where no eigenvalue of `A` has a zero real part and no two eigenvalues are negative complex conjugates of each other.
Examples
julia> A = [3. 4.; 5. 6]2×2 Matrix{Float64}: 3.0 4.0 5.0 6.0julia> B = [1. 1.; 1. 2.]2×2 Matrix{Float64}: 1.0 1.0 1.0 2.0julia> X = lyap(A, B)2×2 Matrix{Float64}: 0.5 -0.5 -0.5 0.25julia> A*X + X*A' ≈ -Btrue
LinearAlgebra.sylvester — Function
sylvester(A, B, C)
Computes the solution `X` to the Sylvester equation `AX + XB + C = 0`, where `A`, `B` and `C` have compatible dimensions and `A` and `-B` have no eigenvalues with equal real part.
Examples
julia> A = [3. 4.; 5. 6]2×2 Matrix{Float64}: 3.0 4.0 5.0 6.0julia> B = [1. 1.; 1. 2.]2×2 Matrix{Float64}: 1.0 1.0 1.0 2.0julia> C = [1. 2.; -2. 1]2×2 Matrix{Float64}: 1.0 2.0 -2.0 1.0julia> X = sylvester(A, B, C)2×2 Matrix{Float64}: -4.46667 1.93333 3.73333 -1.8julia> A*X + X*B ≈ -Ctrue
LinearAlgebra.issuccess — Function
issuccess(F::Factorization)
Test that a factorization of a matrix succeeded.
`issuccess(::CholeskyPivoted)` requires Julia 1.6 or later.
Examples
julia> F = cholesky([1 0; 0 1]);julia> issuccess(F)true
issuccess(F::LU; allowsingular = false)
Test that the LU factorization of a matrix succeeded. By default a factorization that produces a valid but rank-deficient U factor is considered a failure. This can be changed by passing `allowsingular = true`.
The `allowsingular` keyword argument was added in Julia 1.11.
Examples
julia> F = lu([1 2; 1 2], check = false);julia> issuccess(F)falsejulia> issuccess(F, allowsingular = true)true
LinearAlgebra.issymmetric — Function
issymmetric(A) -> Bool
Test whether a matrix is symmetric.
Examples
julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1 2 2 -1julia> issymmetric(a)truejulia> b = [1 im; -im 1]2×2 Matrix{Complex{Int64}}: 1+0im 0+1im 0-1im 1+0imjulia> issymmetric(b)false
LinearAlgebra.isposdef — Function
isposdef(A) -> Bool
Test whether a matrix is positive definite (and Hermitian) by trying to perform a Cholesky factorization of `A`. See also `isposdef!`, `cholesky`.
LinearAlgebra.isposdef! — Function
isposdef!(A) -> Bool
Test whether a matrix is positive definite (and Hermitian) by trying to perform a Cholesky factorization of `A`, overwriting `A` in the process. See also `isposdef`.
Examples
julia> A = [1. 2.; 2. 50.];julia> isposdef!(A)truejulia> A2×2 Matrix{Float64}: 1.0 2.0 2.0 6.78233
LinearAlgebra.istril — Function
istril(A::AbstractMatrix, k::Integer = 0) -> Bool
Test whether `A` is lower triangular starting from the `k`th superdiagonal.
Examples
julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1 2 2 -1julia> istril(a)falsejulia> istril(a, 1)truejulia> c = [1 1 0; 1 1 1; 1 1 1]3×3 Matrix{Int64}: 1 1 0 1 1 1 1 1 1julia> istril(c)falsejulia> istril(c, 1)true
LinearAlgebra.istriu — Function
istriu(A::AbstractMatrix, k::Integer = 0) -> Bool
Test whether `A` is upper triangular starting from the `k`th superdiagonal.
Examples
julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1 2 2 -1julia> istriu(a)falsejulia> istriu(a, -1)truejulia> c = [1 1 1; 1 1 1; 0 1 1]3×3 Matrix{Int64}: 1 1 1 1 1 1 0 1 1julia> istriu(c)falsejulia> istriu(c, -1)true
LinearAlgebra.isdiag — Function
isdiag(A) -> Bool
Test whether a matrix is diagonal, in the sense that `iszero(A[i,j])` is true unless `i == j`. Note that it is not necessary for `A` to be square; if you would also like to check that, you need to check that `size(A, 1) == size(A, 2)`.
Examples
julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1 2 2 -1julia> isdiag(a)falsejulia> b = [im 0; 0 -im]2×2 Matrix{Complex{Int64}}: 0+1im 0+0im 0+0im 0-1imjulia> isdiag(b)truejulia> c = [1 0 0; 0 2 0]2×3 Matrix{Int64}: 1 0 0 0 2 0julia> isdiag(c)truejulia> d = [1 0 0; 0 2 3]2×3 Matrix{Int64}: 1 0 0 0 2 3julia> isdiag(d)false
LinearAlgebra.ishermitian — Function
ishermitian(A) -> Bool
Test whether a matrix is Hermitian.
Examples
julia> a = [1 2; 2 -1]2×2 Matrix{Int64}: 1 2 2 -1julia> ishermitian(a)truejulia> b = [1 im; -im 1]2×2 Matrix{Complex{Int64}}: 1+0im 0+1im 0-1im 1+0imjulia> ishermitian(b)true
Base.transpose — Function
transpose(A)
Lazy transpose. Mutating the returned object should appropriately mutate `A`. Often, but not always, yields `Transpose(A)`, where `Transpose` is a lazy transpose wrapper. Note that this operation is recursive.
This operation is intended for linear algebra usage; for general data manipulation see `permutedims`, which is non-recursive.
Examples
julia> A = [3 2; 0 0]2×2 Matrix{Int64}: 3 2 0 0julia> B = transpose(A)2×2 transpose(::Matrix{Int64}) with eltype Int64: 3 0 2 0julia> B isa Transposetruejulia> transpose(B) === A # the transpose of a transpose unwraps the parenttruejulia> Transpose(B) # however, the constructor always wraps its argument2×2 transpose(transpose(::Matrix{Int64})) with eltype Int64: 3 2 0 0julia> B[1,2] = 4; # modifying B will modify A automaticallyjulia> A2×2 Matrix{Int64}: 3 2 4 0
For complex matrices, the `adjoint` operation is equivalent to a conjugate transpose.
julia> A = reshape([Complex(x, x) for x in 1:4], 2, 2)2×2 Matrix{Complex{Int64}}: 1+1im 3+3im 2+2im 4+4imjulia> adjoint(A) == conj(transpose(A))true
The `transpose` of an `AbstractVector` is a row vector:
julia> v = [1,2,3]3-element Vector{Int64}: 1 2 3julia> transpose(v) # returns a row-vector1×3 transpose(::Vector{Int64}) with eltype Int64: 1 2 3julia> transpose(v) * v # compute the dot product14
For a matrix of matrices, the individual blocks are recursively operated on:
julia> C = [1 3; 2 4]2×2 Matrix{Int64}: 1 3 2 4julia> D = reshape([C, 2C, 3C, 4C], 2, 2) # construct a block matrix2×2 Matrix{Matrix{Int64}}: [1 3; 2 4] [3 9; 6 12] [2 6; 4 8] [4 12; 8 16]julia> transpose(D) # blocks are recursively transposed2×2 transpose(::Matrix{Matrix{Int64}}) with eltype Transpose{Int64, Matrix{Int64}}: [1 2; 3 4] [2 4; 6 8] [3 6; 9 12] [4 8; 12 16]
transpose(F::Factorization)
Lazy transpose of the factorization `F`. By default, returns a `TransposeFactorization`, except for `Factorization`s with real `eltype`, in which case an `AdjointFactorization` is returned.
LinearAlgebra.transpose! — Function
transpose!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti}) where {Tv,Ti}
Transpose the matrix `A` and store it in the matrix `X`. `size(X)` must be equal to `size(transpose(A))`. No additional memory is allocated other than resizing the `rowval` and `nzval` of `X`, if needed.
See `halfperm!`.
transpose!(dest, src)
Transpose array `src` and store the result in the preallocated array `dest`, which should have a size corresponding to `(size(src,2), size(src,1))`. No in-place transposition is supported, and unexpected results will happen if `src` and `dest` have overlapping memory regions.
Examples
julia> A = [3+2im 9+2im; 8+7im 4+6im]2×2 Matrix{Complex{Int64}}: 3+2im 9+2im 8+7im 4+6imjulia> B = zeros(Complex{Int64}, 2, 2)2×2 Matrix{Complex{Int64}}: 0+0im 0+0im 0+0im 0+0imjulia> transpose!(B, A);julia> B2×2 Matrix{Complex{Int64}}: 3+2im 8+7im 9+2im 4+6imjulia> A2×2 Matrix{Complex{Int64}}: 3+2im 9+2im 8+7im 4+6im
LinearAlgebra.Transpose — Type
Transpose
Lazy wrapper type for a transpose view of the underlying linear algebra object, usually an `AbstractVector`/`AbstractMatrix`. Usually, the `Transpose` constructor should not be called directly; use `transpose` instead. To materialize the view use `copy`.
This type is intended for linear algebra usage; for general data manipulation see `permutedims`.
Examples
julia> A = [2 3; 0 0]2×2 Matrix{Int64}: 2 3 0 0julia> Transpose(A)2×2 transpose(::Matrix{Int64}) with eltype Int64: 2 0 3 0
LinearAlgebra.TransposeFactorization — Type
TransposeFactorization
Lazy wrapper type for the transpose of the underlying `Factorization` object. Usually, the `TransposeFactorization` constructor should not be called directly; use `transpose(::Factorization)` instead.
Base.adjoint — Function
A'
adjoint(A)
Lazy adjoint (conjugate transposition). Note that `adjoint` is applied recursively to elements.
For number types, `adjoint` returns the complex conjugate, and therefore it is equivalent to the identity function for real numbers.
This operation is intended for linear algebra usage; for general data manipulation see `permutedims`.
Examples
julia> A = [3+2im 9+2im; 0 0]2×2 Matrix{Complex{Int64}}: 3+2im 9+2im 0+0im 0+0imjulia> B = A' # equivalently adjoint(A)2×2 adjoint(::Matrix{Complex{Int64}}) with eltype Complex{Int64}: 3-2im 0+0im 9-2im 0+0imjulia> B isa Adjointtruejulia> adjoint(B) === A # the adjoint of an adjoint unwraps the parenttruejulia> Adjoint(B) # however, the constructor always wraps its argument2×2 adjoint(adjoint(::Matrix{Complex{Int64}})) with eltype Complex{Int64}: 3+2im 9+2im 0+0im 0+0imjulia> B[1,2] = 4 + 5im; # modifying B will modify A automaticallyjulia> A2×2 Matrix{Complex{Int64}}: 3+2im 9+2im 4-5im 0+0im
For real matrices, the `adjoint` operation is equivalent to a `transpose`.
julia> A = reshape([x for x in 1:4], 2, 2)2×2 Matrix{Int64}: 1 3 2 4julia> A'2×2 adjoint(::Matrix{Int64}) with eltype Int64: 1 2 3 4julia> adjoint(A) == transpose(A)true
The adjoint of an `AbstractVector` is a row vector:
julia> x = [3, 4im]2-element Vector{Complex{Int64}}: 3 + 0im 0 + 4imjulia> x'1×2 adjoint(::Vector{Complex{Int64}}) with eltype Complex{Int64}: 3+0im 0-4imjulia> x'x # compute the dot product, equivalently x' * x25 + 0im
For a matrix of matrices, the individual blocks are recursively operated on:
julia> A = reshape([x + im*x for x in 1:4], 2, 2)2×2 Matrix{Complex{Int64}}: 1+1im 3+3im 2+2im 4+4imjulia> C = reshape([A, 2A, 3A, 4A], 2, 2)2×2 Matrix{Matrix{Complex{Int64}}}: [1+1im 3+3im; 2+2im 4+4im] [3+3im 9+9im; 6+6im 12+12im] [2+2im 6+6im; 4+4im 8+8im] [4+4im 12+12im; 8+8im 16+16im]julia> C'2×2 adjoint(::Matrix{Matrix{Complex{Int64}}}) with eltype Adjoint{Complex{Int64}, Matrix{Complex{Int64}}}: [1-1im 2-2im; 3-3im 4-4im] [2-2im 4-4im; 6-6im 8-8im] [3-3im 6-6im; 9-9im 12-12im] [4-4im 8-8im; 12-12im 16-16im]
adjoint(F::Factorization)
Lazy adjoint of the factorization `F`. By default, returns an `AdjointFactorization` wrapper.
LinearAlgebra.adjoint! — Function
adjoint!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti}) where {Tv,Ti}
Transpose the matrix `A` and store the adjoint of the elements in the matrix `X`. `size(X)` must be equal to `size(transpose(A))`. No additional memory is allocated other than resizing the `rowval` and `nzval` of `X`, if needed.
See `halfperm!`.
adjoint!(dest, src)
Conjugate transpose array `src` and store the result in the preallocated array `dest`, which should have a size corresponding to `(size(src,2), size(src,1))`. No in-place transposition is supported, and unexpected results will happen if `src` and `dest` have overlapping memory regions.
Examples
julia> A = [3+2im 9+2im; 8+7im 4+6im]2×2 Matrix{Complex{Int64}}: 3+2im 9+2im 8+7im 4+6imjulia> B = zeros(Complex{Int64}, 2, 2)2×2 Matrix{Complex{Int64}}: 0+0im 0+0im 0+0im 0+0imjulia> adjoint!(B, A);julia> B2×2 Matrix{Complex{Int64}}: 3-2im 8-7im 9-2im 4-6imjulia> A2×2 Matrix{Complex{Int64}}: 3+2im 9+2im 8+7im 4+6im
LinearAlgebra.Adjoint — Type
Adjoint
Lazy wrapper type for an adjoint view of the underlying linear algebra object, usually an `AbstractVector`/`AbstractMatrix`. Usually, the `Adjoint` constructor should not be called directly; use `adjoint` instead. To materialize the view use `copy`.
This type is intended for linear algebra usage; for general data manipulation see `permutedims`.
Examples
julia> A = [3+2im 9+2im; 0 0]2×2 Matrix{Complex{Int64}}: 3+2im 9+2im 0+0im 0+0imjulia> Adjoint(A)2×2 adjoint(::Matrix{Complex{Int64}}) with eltype Complex{Int64}: 3-2im 0+0im 9-2im 0+0im
LinearAlgebra.AdjointFactorization — Type
AdjointFactorization
Lazy wrapper type for the adjoint of the underlying `Factorization` object. Usually, the `AdjointFactorization` constructor should not be called directly; use `adjoint(::Factorization)` instead.
Base.copy — Method
copy(A::Transpose)
copy(A::Adjoint)
Eagerly evaluate the lazy matrix transpose/adjoint. Note that the transposition is applied recursively to elements.
This operation is intended for linear algebra usage; for general data manipulation see `permutedims`, which is non-recursive.
Examples
julia> A = [1 2im; -3im 4]2×2 Matrix{Complex{Int64}}: 1+0im 0+2im 0-3im 4+0imjulia> T = transpose(A)2×2 transpose(::Matrix{Complex{Int64}}) with eltype Complex{Int64}: 1+0im 0-3im 0+2im 4+0imjulia> copy(T)2×2 Matrix{Complex{Int64}}: 1+0im 0-3im 0+2im 4+0im
LinearAlgebra.stride1 — Function
stride1(A) -> Int
Return the distance between successive array elements in dimension 1 in units of element size.
Examples
julia> A = [1,2,3,4]4-element Vector{Int64}: 1 2 3 4julia> LinearAlgebra.stride1(A)1julia> B = view(A, 2:2:4)2-element view(::Vector{Int64}, 2:2:4) with eltype Int64: 2 4julia> LinearAlgebra.stride1(B)2
LinearAlgebra.checksquare — Function
LinearAlgebra.checksquare(A)
Check that a matrix is square, then return its common dimension. For multiple arguments, return a vector.
Examples
julia> A = fill(1, (4,4)); B = fill(1, (5,5));julia> LinearAlgebra.checksquare(A, B)2-element Vector{Int64}: 4 5
LinearAlgebra.peakflops — Function
LinearAlgebra.peakflops(n::Integer=4096; eltype::DataType=Float64, ntrials::Integer=3, parallel::Bool=false)
`peakflops` computes the peak flop rate of the computer by using double-precision `gemm!`. By default, if no arguments are specified, it multiplies two `Float64` matrices of size `n x n`, where `n = 4096`. If the underlying BLAS is using multiple threads, higher flop rates are realized. The number of BLAS threads can be set with `BLAS.set_num_threads(n)`.
If the keyword argument `eltype` is provided, `peakflops` will construct matrices with elements of type `eltype` for calculating the peak flop rate.
By default, `peakflops` will use the best timing from 3 trials. If the `ntrials` keyword argument is provided, `peakflops` will use that many trials for picking the best timing.
If the keyword argument `parallel` is set to `true`, `peakflops` is run in parallel on all the worker processors. The flop rate of the entire parallel computer is returned. When running in parallel, only 1 BLAS thread is used. The argument `n` still refers to the size of the problem that is solved on each processor.
This function requires at least Julia 1.1. In Julia 1.0 it is available from the standard library `InteractiveUtils`.
LinearAlgebra.hermitianpart — Function
hermitianpart(A::AbstractMatrix, uplo::Symbol=:U) -> Hermitian
Return the Hermitian part of the square matrix `A`, defined as `(A + A') / 2`, as a `Hermitian` matrix. For real matrices `A`, this is also known as the symmetric part of `A`; it is also sometimes called the "operator real part". The optional argument `uplo` controls the corresponding argument of the `Hermitian` view. For real matrices, the latter is equivalent to a `Symmetric` view.
See also `hermitianpart!` for the corresponding in-place operation.
This function requires Julia 1.10 or later.
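An illustrative sketch (inputs chosen for demonstration): the Hermitian part averages a matrix with its adjoint.

```julia
julia> using LinearAlgebra

julia> A = [1 2; 4 3];

julia> H = hermitianpart(A)    # (A + A') / 2, wrapped as Hermitian
2×2 Hermitian{Float64, Matrix{Float64}}:
 1.0  3.0
 3.0  3.0

julia> H == (A + A') / 2
true
```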
LinearAlgebra.hermitianpart! — Function
hermitianpart!(A::AbstractMatrix, uplo::Symbol=:U) -> Hermitian
Overwrite the square matrix `A` in place with its Hermitian part `(A + A') / 2`, and return `Hermitian(A, uplo)`. For real matrices `A`, this is also known as the symmetric part of `A`.
See also `hermitianpart` for the corresponding out-of-place operation.
This function requires Julia 1.10 or later.
LinearAlgebra.copy_adjoint! — Function
copy_adjoint!(B::AbstractVecOrMat, ir_dest::AbstractRange{Int}, jr_dest::AbstractRange{Int}, A::AbstractVecOrMat, ir_src::AbstractRange{Int}, jr_src::AbstractRange{Int}) -> B
Efficiently copy elements of matrix `A` to `B` with adjunction as follows:
B[ir_dest, jr_dest] = adjoint(A)[jr_src, ir_src]
The elements `B[ir_dest, jr_dest]` are overwritten. Furthermore, the index range parameters must satisfy `length(ir_dest) == length(jr_src)` and `length(jr_dest) == length(ir_src)`.
LinearAlgebra.copy_transpose! — Function
copy_transpose!(B::AbstractVecOrMat, ir_dest::AbstractRange{Int}, jr_dest::AbstractRange{Int}, A::AbstractVecOrMat, ir_src::AbstractRange{Int}, jr_src::AbstractRange{Int}) -> B
Efficiently copy elements of matrix `A` to `B` with transposition as follows:
B[ir_dest, jr_dest] = transpose(A)[jr_src, ir_src]
The elements `B[ir_dest, jr_dest]` are overwritten. Furthermore, the index range parameters must satisfy `length(ir_dest) == length(jr_src)` and `length(jr_dest) == length(ir_src)`.
copy_transpose!(B::AbstractMatrix, ir_dest::AbstractUnitRange, jr_dest::AbstractUnitRange, tM::AbstractChar, M::AbstractVecOrMat, ir_src::AbstractUnitRange, jr_src::AbstractUnitRange) -> B
Efficiently copy elements of matrix `M` to `B` conditioned on the character parameter `tM` as follows:

| `tM` | Destination | Source |
|------|-------------|--------|
| 'N' | `B[ir_dest, jr_dest]` | `transpose(M)[jr_src, ir_src]` |
| 'T' | `B[ir_dest, jr_dest]` | `M[jr_src, ir_src]` |
| 'C' | `B[ir_dest, jr_dest]` | `conj(M)[jr_src, ir_src]` |

The elements `B[ir_dest, jr_dest]` are overwritten. Furthermore, the index range parameters must satisfy `length(ir_dest) == length(jr_src)` and `length(jr_dest) == length(ir_src)`.
See also `copyto!` and `copy_adjoint!`.
In many cases there are in-place versions of matrix operations that allow you to supply a pre-allocated output vector or matrix. This is useful when optimizing critical code in order to avoid the overhead of repeated allocations. These in-place operations are suffixed with `!` below (e.g. `mul!`) according to the usual Julia convention.
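As a motivating sketch (illustrative, with arbitrary sizes): reusing a preallocated output across many products avoids allocating a fresh array on every iteration.

```julia
julia> using LinearAlgebra

julia> A = rand(100, 100); x = rand(100);

julia> y = similar(x);        # allocate the output buffer once

julia> for _ in 1:1_000
           mul!(y, A, x)      # overwrites y; no per-iteration allocation
       end
```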
LinearAlgebra.mul! — Function
mul!(Y, A, B) -> Y
Calculates the matrix-matrix or matrix-vector product $AB$ and stores the result in `Y`, overwriting the existing value of `Y`. Note that `Y` must not be aliased with either `A` or `B`.
Examples
julia> A = [1.0 2.0; 3.0 4.0]; B = [1.0 1.0; 1.0 1.0]; Y = similar(B);julia> mul!(Y, A, B) === Ytruejulia> Y2×2 Matrix{Float64}: 3.0 3.0 7.0 7.0julia> Y == A * Btrue
Implementation
For custom matrix and vector types, it is recommended to implement 5-argument `mul!` rather than implementing 3-argument `mul!` directly, if possible.
mul!(C, A, B, α, β) -> C
Combined in-place matrix-matrix or matrix-vector multiply-add $A B α + C β$. The result is stored in `C` by overwriting it. Note that `C` must not be aliased with either `A` or `B`.
Five-argument `mul!` requires at least Julia 1.3.
Examples
julia> A = [1.0 2.0; 3.0 4.0]; B = [1.0 1.0; 1.0 1.0]; C = [1.0 2.0; 3.0 4.0];julia> α, β = 100.0, 10.0;julia> mul!(C, A, B, α, β) === Ctruejulia> C2×2 Matrix{Float64}: 310.0 320.0 730.0 740.0julia> C_original = [1.0 2.0; 3.0 4.0]; # A copy of the original value of Cjulia> C == A * B * α + C_original * βtrue
LinearAlgebra.lmul! — Function
lmul!(a::Number, B::AbstractArray)
Scale an array `B` by a scalar `a`, overwriting `B` in place. Use `rmul!` to multiply the scalar from the right. The scaling operation respects the semantics of the multiplication `*` between `a` and an element of `B`. In particular, this also applies to multiplication involving non-finite numbers such as `NaN` and `±Inf`.
Prior to Julia 1.1, `NaN` and `±Inf` entries in `B` were treated inconsistently.
Examples
julia> B = [1 2; 3 4]2×2 Matrix{Int64}: 1 2 3 4julia> lmul!(2, B)2×2 Matrix{Int64}: 2 4 6 8julia> lmul!(0.0, [Inf])1-element Vector{Float64}: NaN
lmul!(A, B)
Calculate the matrix-matrix product $AB$, overwriting `B`, and return the result. Here, `A` must be of a special matrix type, like, e.g., `Diagonal`, `UpperTriangular` or `LowerTriangular`, or of some orthogonal type; see `QR`.
Examples
julia> B = [0 1; 1 0];julia> A = UpperTriangular([1 2; 0 3]);julia> lmul!(A, B);julia> B2×2 Matrix{Int64}: 2 1 3 0julia> B = [1.0 2.0; 3.0 4.0];julia> F = qr([0 1; -1 0]);julia> lmul!(F.Q, B)2×2 Matrix{Float64}: 3.0 4.0 1.0 2.0
LinearAlgebra.rmul! — Function
rmul!(A::AbstractArray, b::Number)
Scale an array `A` by a scalar `b`, overwriting `A` in place. Use `lmul!` to multiply the scalar from the left. The scaling operation respects the semantics of the multiplication `*` between an element of `A` and `b`. In particular, this also applies to multiplication involving non-finite numbers such as `NaN` and `±Inf`.
Prior to Julia 1.1, `NaN` and `±Inf` entries in `A` were treated inconsistently.
Examples
julia> A = [1 2; 3 4]2×2 Matrix{Int64}: 1 2 3 4julia> rmul!(A, 2)2×2 Matrix{Int64}: 2 4 6 8julia> rmul!([NaN], 0.0)1-element Vector{Float64}: NaN
rmul!(A, B)
Calculate the matrix-matrix product $AB$, overwriting `A`, and return the result. Here, `B` must be of a special matrix type, like, e.g., `Diagonal`, `UpperTriangular` or `LowerTriangular`, or of some orthogonal type; see `QR`.
Examples
julia> A = [0 1; 1 0];julia> B = UpperTriangular([1 2; 0 3]);julia> rmul!(A, B);julia> A2×2 Matrix{Int64}: 0 3 1 2julia> A = [1.0 2.0; 3.0 4.0];julia> F = qr([0 1; -1 0]);julia> rmul!(A, F.Q)2×2 Matrix{Float64}: 2.0 1.0 4.0 3.0
LinearAlgebra.ldiv! — Function
ldiv!(Y, A, B) -> Y
Compute `A \ B` in place and store the result in `Y`, returning the result.
The argument `A` should not be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by `factorize` or `cholesky`). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in place via, e.g., `lu!`), and performance-critical situations requiring `ldiv!` usually also require fine-grained control over the factorization of `A`.
Certain structured matrix types, such as `Diagonal` and `UpperTriangular`, are permitted, as these are already in a factorized form.
Examples
julia> A = [1 2.2 4; 3.1 0.2 3; 4 1 2];julia> X = [1; 2.5; 3];julia> Y = zero(X);julia> ldiv!(Y, qr(A), X);julia> Y ≈ A\Xtrue
ldiv!(A, B)
Compute `A \ B` in place, overwriting `B` to store the result.
The argument `A` should not be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by `factorize` or `cholesky`). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in place via, e.g., `lu!`), and performance-critical situations requiring `ldiv!` usually also require fine-grained control over the factorization of `A`.
Certain structured matrix types, such as `Diagonal` and `UpperTriangular`, are permitted, as these are already in a factorized form.
Examples
julia> A = [1 2.2 4; 3.1 0.2 3; 4 1 2];julia> X = [1; 2.5; 3];julia> Y = copy(X);julia> ldiv!(qr(A), X);julia> X ≈ A\Ytrue
ldiv!(a::Number, B::AbstractArray)
Divide each entry in an array `B` by a scalar `a`, overwriting `B` in place. Use `rdiv!` to divide by a scalar from the right.
Examples
julia> B = [1.0 2.0; 3.0 4.0]2×2 Matrix{Float64}: 1.0 2.0 3.0 4.0julia> ldiv!(2.0, B)2×2 Matrix{Float64}: 0.5 1.0 1.5 2.0
ldiv!(A::Tridiagonal, B::AbstractVecOrMat) -> B
Compute `A \ B` in place by Gaussian elimination with partial pivoting and store the result in `B`, returning the result. In the process, the diagonals of `A` are overwritten as well.
`ldiv!` for `Tridiagonal` left-hand sides requires at least Julia 1.11.
LinearAlgebra.rdiv! — Function
rdiv!(A, B)
Compute `A / B` in place, overwriting `A` to store the result.
The argument `B` should not be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by `factorize` or `cholesky`). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in place via, e.g., `lu!`), and performance-critical situations requiring `rdiv!` usually also require fine-grained control over the factorization of `B`.
Certain structured matrix types, such as `Diagonal` and `UpperTriangular`, are permitted, as these are already in a factorized form.
rdiv!(A::AbstractArray, b::Number)
Divide each entry in an array `A` by a scalar `b`, overwriting `A` in place. Use `ldiv!` to divide by a scalar from the left.
Examples
julia> A = [1.0 2.0; 3.0 4.0]2×2 Matrix{Float64}: 1.0 2.0 3.0 4.0julia> rdiv!(A, 2.0)2×2 Matrix{Float64}: 0.5 1.0 1.5 2.0
In Julia (as in much of scientific computation), dense linear-algebra operations are based on the LAPACK library, which in turn is built on top of basic linear-algebra building blocks known as the BLAS. There are highly optimized implementations of BLAS available for every computer architecture, and sometimes in high-performance linear algebra routines it is useful to call the BLAS functions directly.
`LinearAlgebra.BLAS` provides wrappers for some of the BLAS functions. Those BLAS functions that overwrite one of the input arrays have names ending in `'!'`. Usually, a BLAS function has four methods defined, for `Float32`, `Float64`, `ComplexF32`, and `ComplexF64` arrays.
Many BLAS functions accept arguments that determine whether to transpose an argument (`trans`), which triangle of a matrix to reference (`uplo` or `ul`), whether the diagonal of a triangular matrix can be assumed to be all ones (`dA`), or which side of a matrix multiplication the input argument belongs on (`side`). The possibilities are:
| `side` | Meaning |
|--------|---------|
| 'L' | The argument goes on the *left* side of a matrix-matrix operation. |
| 'R' | The argument goes on the *right* side of a matrix-matrix operation. |

| `uplo`/`ul` | Meaning |
|-------------|---------|
| 'U' | Only the *upper* triangle of the matrix will be used. |
| 'L' | Only the *lower* triangle of the matrix will be used. |

| `trans`/`tX` | Meaning |
|--------------|---------|
| 'N' | The input matrix `X` is not transposed or conjugated. |
| 'T' | The input matrix `X` will be transposed. |
| 'C' | The input matrix `X` will be conjugated and transposed. |

| `diag`/`dX` | Meaning |
|-------------|---------|
| 'N' | The diagonal values of the matrix `X` will be read. |
| 'U' | The diagonal of the matrix `X` is assumed to be all ones. |
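As a concrete illustration of the `trans` flags (a sketch with arbitrary inputs, not part of the reference tables):

```julia
julia> using LinearAlgebra

julia> A = [1.0 2.0; 3.0 4.0]; B = [0.0 1.0; 1.0 0.0];

julia> BLAS.gemm('N', 'N', A, B) ≈ A * B    # neither operand transposed
true

julia> BLAS.gemm('T', 'N', A, B) ≈ A' * B   # first operand transposed
true
```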
LinearAlgebra.BLAS — Module
Interface to BLAS subroutines.
LinearAlgebra.BLAS.set_num_threads — Function
set_num_threads(n::Integer)
set_num_threads(::Nothing)
Set the number of threads the BLAS library should use equal to `n::Integer`.
Also accepts `nothing`, in which case Julia tries to guess the default number of threads. Passing `nothing` is discouraged and mainly exists for historical reasons.
LinearAlgebra.BLAS.get_num_threads — Function
get_num_threads()
Get the number of threads the BLAS library is using.
`get_num_threads` requires at least Julia 1.6.
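A brief interactive sketch (the effective value can depend on the BLAS backend and core count, so treat the echoed number as illustrative):

```julia
julia> using LinearAlgebra

julia> BLAS.set_num_threads(4)

julia> BLAS.get_num_threads()   # typically echoes the requested value
4
```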
BLAS functions can be divided into three groups, also called three levels, depending on when they were first proposed, the type of input parameters, and the complexity of the operation.
The level 1 BLAS functions were first proposed in [(Lawson, 1979)][Lawson-1979] and define operations between scalars and vectors.
[Lawson-1979]: https://dl.acm.org/doi/10.1145/355841.355847
LinearAlgebra.BLAS.rot! — Function
rot!(n, X, incx, Y, incy, c, s)
Overwrite `X` with `c*X + s*Y` and `Y` with `-conj(s)*X + c*Y` for the first `n` elements of array `X` with stride `incx` and the first `n` elements of array `Y` with stride `incy`. Returns `X` and `Y`.
`rot!` requires at least Julia 1.5.
LinearAlgebra.BLAS.scal! — Function
scal!(n, a, X, incx)
scal!(a, X)
Overwrite `X` with `a*X` for the first `n` elements of array `X` with stride `incx`. Returns `X`.
If `n` and `incx` are not provided, `length(X)` and `stride(X,1)` are used.
LinearAlgebra.BLAS.scal — Function
scal(n, a, X, incx)
scal(a, X)
Return `X` scaled by `a` for the first `n` elements of array `X` with stride `incx`.
If `n` and `incx` are not provided, `length(X)` and `stride(X,1)` are used.
LinearAlgebra.BLAS.blascopy! — Function
blascopy!(n, X, incx, Y, incy)
Copy `n` elements of array `X` with stride `incx` to array `Y` with stride `incy`. Returns `Y`.
LinearAlgebra.BLAS.dot — Function
dot(n, X, incx, Y, incy)
Dot product of two vectors consisting of `n` elements of array `X` with stride `incx` and `n` elements of array `Y` with stride `incy`.
Examples
julia> BLAS.dot(10, fill(1.0, 10), 1, fill(1.0, 20), 2)10.0
LinearAlgebra.BLAS.dotu — Function
dotu(n, X, incx, Y, incy)
Dot function for two complex vectors consisting of `n` elements of array `X` with stride `incx` and `n` elements of array `Y` with stride `incy`.
Examples
julia> BLAS.dotu(10, fill(1.0im, 10), 1, fill(1.0+im, 20), 2)-10.0 + 10.0im
LinearAlgebra.BLAS.dotc — Function
dotc(n, X, incx, U, incy)
Dot function for two complex vectors, consisting of `n` elements of array `X` with stride `incx` and `n` elements of array `U` with stride `incy`, conjugating the first vector.
Examples
julia> BLAS.dotc(10, fill(1.0im, 10), 1, fill(1.0+im, 20), 2)10.0 - 10.0im
LinearAlgebra.BLAS.nrm2 — Function
nrm2(n, X, incx)
2-norm of a vector consisting of `n` elements of array `X` with stride `incx`.
Examples
julia> BLAS.nrm2(4, fill(1.0, 8), 2)2.0julia> BLAS.nrm2(1, fill(1.0, 8), 2)1.0
LinearAlgebra.BLAS.asum — Function
asum(n, X, incx)
Sum of the magnitudes of the first `n` elements of array `X` with stride `incx`.
For a real array, the magnitude is the absolute value. For a complex array, the magnitude is the sum of the absolute value of the real part and the absolute value of the imaginary part.
Examples
julia> BLAS.asum(5, fill(1.0im, 10), 2)5.0julia> BLAS.asum(2, fill(1.0im, 10), 5)2.0
LinearAlgebra.BLAS.iamax — Function
iamax(n, dx, incx)
iamax(dx)
Find the index of the element of `dx` with the maximum absolute value. `n` is the length of `dx`, and `incx` is the stride. If `n` and `incx` are not provided, they assume default values of `n=length(dx)` and `incx=stride1(dx)`.
The level 2 BLAS functions were published in [(Dongarra, 1988)][Dongarra-1988], and define matrix-vector operations.
[Dongarra-1988]: https://dl.acm.org/doi/10.1145/42288.42291
Return a vector
LinearAlgebra.BLAS.gemv! — Function
gemv!(tA, alpha, A, x, beta, y)
Update the vector `y` as `alpha*A*x + beta*y` or `alpha*A'x + beta*y` according to `tA`. `alpha` and `beta` are scalars. Return the updated `y`.
LinearAlgebra.BLAS.gemv — Method
gemv(tA, alpha, A, x)
Return `alpha*A*x` or `alpha*A'x` according to `tA`. `alpha` is a scalar.
LinearAlgebra.BLAS.gemv — Method
gemv(tA, A, x)
Return `A*x` or `A'x` according to `tA`.
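A brief usage sketch (arbitrary inputs): `gemv!` performs the scale-and-accumulate in one call and mutates `y`.

```julia
julia> using LinearAlgebra

julia> A = [1.0 2.0; 3.0 4.0]; x = [1.0, 1.0]; y = [10.0, 10.0];

julia> BLAS.gemv!('N', 2.0, A, x, 1.0, y)   # y ← 2*A*x + y
2-element Vector{Float64}:
 16.0
 24.0
```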
LinearAlgebra.BLAS.gbmv! — Function
gbmv!(trans, m, kl, ku, alpha, A, x, beta, y)
Update vector `y` as `alpha*A*x + beta*y` or `alpha*A'*x + beta*y` according to `trans`. The matrix `A` is a general band matrix of dimension `m` by `size(A,2)` with `kl` sub-diagonals and `ku` super-diagonals. `alpha` and `beta` are scalars. Return the updated `y`.
LinearAlgebra.BLAS.gbmv — Function
gbmv(trans, m, kl, ku, alpha, A, x)
Return `alpha*A*x` or `alpha*A'*x` according to `trans`. The matrix `A` is a general band matrix of dimension `m` by `size(A,2)` with `kl` sub-diagonals and `ku` super-diagonals, and `alpha` is a scalar.
LinearAlgebra.BLAS.hemv! — Function
hemv!(ul, alpha, A, x, beta, y)
Update the vector `y` as `alpha*A*x + beta*y`. `A` is assumed to be Hermitian. Only the `ul` triangle of `A` is used. `alpha` and `beta` are scalars. Return the updated `y`.
LinearAlgebra.BLAS.hemv — Method
hemv(ul, alpha, A, x)
Return `alpha*A*x`. `A` is assumed to be Hermitian. Only the `ul` triangle of `A` is used. `alpha` is a scalar.
LinearAlgebra.BLAS.hemv — Method
hemv(ul, A, x)
Return `A*x`. `A` is assumed to be Hermitian. Only the `ul` triangle of `A` is used.
LinearAlgebra.BLAS.hpmv! — Function
hpmv!(uplo, α, AP, x, β, y)
Update vector `y` as `α*A*x + β*y`, where `A` is a Hermitian matrix provided in packed format `AP`.
With `uplo = 'U'`, the array `AP` must contain the upper triangular part of the Hermitian matrix packed sequentially, column by column, so that `AP[1]` contains `A[1, 1]`, `AP[2]` and `AP[3]` contain `A[1, 2]` and `A[2, 2]` respectively, and so on.
With `uplo = 'L'`, the array `AP` must contain the lower triangular part of the Hermitian matrix packed sequentially, column by column, so that `AP[1]` contains `A[1, 1]`, `AP[2]` and `AP[3]` contain `A[2, 1]` and `A[3, 1]` respectively, and so on.
The scalar inputs `α` and `β` must be complex or real numbers.
The array inputs `x`, `y` and `AP` must all be of `ComplexF32` or `ComplexF64` type.
Return the updated `y`.
`hpmv!` requires at least Julia 1.5.
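To make the packed layout concrete, here is an illustrative sketch (inputs chosen for demonstration) that packs the upper triangle of a 2×2 Hermitian matrix column by column and checks the result against an ordinary product.

```julia
julia> using LinearAlgebra

julia> A = ComplexF64[2 1+im; 1-im 3];      # a Hermitian matrix

julia> AP = [A[1,1], A[1,2], A[2,2]];       # upper triangle, packed column by column

julia> x = ComplexF64[1, 2]; y = zeros(ComplexF64, 2);

julia> BLAS.hpmv!('U', one(ComplexF64), AP, x, zero(ComplexF64), y);

julia> y ≈ A * x
true
```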
LinearAlgebra.BLAS.symv! — Function
symv!(ul, alpha, A, x, beta, y)
Update the vector `y` as `alpha*A*x + beta*y`. `A` is assumed to be symmetric. Only the `ul` triangle of `A` is used. `alpha` and `beta` are scalars. Return the updated `y`.
LinearAlgebra.BLAS.symv — Method
symv(ul, alpha, A, x)
Return `alpha*A*x`. `A` is assumed to be symmetric. Only the `ul` triangle of `A` is used. `alpha` is a scalar.
LinearAlgebra.BLAS.symv — Method
symv(ul, A, x)
Return `A*x`. `A` is assumed to be symmetric. Only the `ul` triangle of `A` is used.
LinearAlgebra.BLAS.sbmv! — Function
sbmv!(uplo, k, alpha, A, x, beta, y)
Update vector `y` as `alpha*A*x + beta*y` where `A` is a symmetric band matrix of order `size(A,2)` with `k` super-diagonals stored in the argument `A`. The storage layout for `A` is described in the reference BLAS module, level-2 BLAS at https://www.netlib.org/lapack/explore-html/. Only the `uplo` triangle of `A` is used.
Return the updated `y`.
LinearAlgebra.BLAS.sbmv — Method
sbmv(uplo, k, alpha, A, x)
Return `alpha*A*x` where `A` is a symmetric band matrix of order `size(A,2)` with `k` super-diagonals stored in the argument `A`. Only the `uplo` triangle of `A` is used.
LinearAlgebra.BLAS.sbmv — Method
sbmv(uplo, k, A, x)
Return `A*x` where `A` is a symmetric band matrix of order `size(A,2)` with `k` super-diagonals stored in the argument `A`. Only the `uplo` triangle of `A` is used.
LinearAlgebra.BLAS.spmv! — Function
spmv!(uplo, α, AP, x, β, y)
Update vector `y` as `α*A*x + β*y`, where `A` is a symmetric matrix provided in packed format `AP`.
With `uplo = 'U'`, the array `AP` must contain the upper triangular part of the symmetric matrix packed sequentially, column by column, so that `AP[1]` contains `A[1, 1]`, `AP[2]` and `AP[3]` contain `A[1, 2]` and `A[2, 2]` respectively, and so on.
With `uplo = 'L'`, the array `AP` must contain the lower triangular part of the symmetric matrix packed sequentially, column by column, so that `AP[1]` contains `A[1, 1]`, `AP[2]` and `AP[3]` contain `A[2, 1]` and `A[3, 1]` respectively, and so on.
The scalar inputs `α` and `β` must be real.
The array inputs `x`, `y` and `AP` must all be of `Float32` or `Float64` type.
Return the updated `y`.
`spmv!` requires at least Julia 1.5.
LinearAlgebra.BLAS.trmv! — Function
LinearAlgebra.BLAS.trmv — Function
LinearAlgebra.BLAS.trsv! — Function
LinearAlgebra.BLAS.trsv — Function
Return a matrix
LinearAlgebra.BLAS.ger! — Function
ger!(alpha, x, y, A)
Rank-1 update of the matrix `A` with vectors `x` and `y` as `alpha*x*y' + A`.
LinearAlgebra.BLAS.her! — Function
her!(uplo, alpha, x, A)
Methods for complex arrays only. Rank-1 update of the Hermitian matrix `A` with vector `x` as `alpha*x*x' + A`. `uplo` controls which triangle of `A` is updated. Returns `A`.
LinearAlgebra.BLAS.syr! — Function
syr!(uplo, alpha, x, A)
Rank-1 update of the symmetric matrix `A` with vector `x` as `alpha*x*transpose(x) + A`. `uplo` controls which triangle of `A` is updated. Returns `A`.
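An illustrative rank-1 update (a sketch with arbitrary inputs, not from the docstring):

```julia
julia> using LinearAlgebra

julia> x = [1.0, 2.0]; y = [3.0, 4.0]; A = zeros(2, 2);

julia> BLAS.ger!(1.0, x, y, A)     # A ← 1.0 * x * y' + A
2×2 Matrix{Float64}:
 3.0  4.0
 6.0  8.0
```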
LinearAlgebra.BLAS.spr! — Function
spr!(uplo, α, x, AP)
Update matrix `A` as `A + α*x*x'`, where `A` is a symmetric matrix provided in packed format `AP` and `x` is a vector.
With `uplo = 'U'`, the array `AP` must contain the upper triangular part of the symmetric matrix packed sequentially, column by column, so that `AP[1]` contains `A[1, 1]`, `AP[2]` and `AP[3]` contain `A[1, 2]` and `A[2, 2]` respectively, and so on.
With `uplo = 'L'`, the array `AP` must contain the lower triangular part of the symmetric matrix packed sequentially, column by column, so that `AP[1]` contains `A[1, 1]`, `AP[2]` and `AP[3]` contain `A[2, 1]` and `A[3, 1]` respectively, and so on.
The scalar input `α` must be real.
The array inputs `x` and `AP` must all be of `Float32` or `Float64` type. Return the updated `AP`.
`spr!` requires at least Julia 1.8.
The level 3 BLAS functions were published in [(Dongarra, 1990)][Dongarra-1990], and define matrix-matrix operations.
[Dongarra-1990]: https://dl.acm.org/doi/10.1145/77626.79170
LinearAlgebra.BLAS.gemmt! — Function
LinearAlgebra.BLAS.gemmt — Method
LinearAlgebra.BLAS.gemmt — Method
LinearAlgebra.BLAS.gemm! — Function
gemm!(tA, tB, alpha, A, B, beta, C)
Update `C` as `alpha*A*B + beta*C` or the other three variants according to `tA` and `tB`. Return the updated `C`.
LinearAlgebra.BLAS.gemm — Method
gemm(tA, tB, alpha, A, B)
Return `alpha*A*B` or the other three variants according to `tA` and `tB`.
LinearAlgebra.BLAS.gemm — Method
gemm(tA, tB, A, B)
Return `A*B` or the other three variants according to `tA` and `tB`.
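A small usage sketch (arbitrary inputs): the arguments of `gemm!` mirror the BLAS convention `C ← α*op(A)*op(B) + β*C`.

```julia
julia> using LinearAlgebra

julia> A = [1.0 0.0; 0.0 2.0]; B = [1.0 2.0; 3.0 4.0]; C = ones(2, 2);

julia> BLAS.gemm!('N', 'N', 2.0, A, B, 1.0, C)   # C ← 2*A*B + C
2×2 Matrix{Float64}:
  3.0   5.0
 13.0  17.0
```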
LinearAlgebra.BLAS.symm! — Function
LinearAlgebra.BLAS.symm — Method
LinearAlgebra.BLAS.symm — Method
LinearAlgebra.BLAS.hemm! — Function
LinearAlgebra.BLAS.hemm — Method
LinearAlgebra.BLAS.hemm — Method
LinearAlgebra.BLAS.syrk! — Function
LinearAlgebra.BLAS.syrk — Function
LinearAlgebra.BLAS.herk! — Function
LinearAlgebra.BLAS.herk — Function
LinearAlgebra.BLAS.syr2k! — Function
LinearAlgebra.BLAS.syr2k — Function
LinearAlgebra.BLAS.her2k! — Function
LinearAlgebra.BLAS.her2k — Function
LinearAlgebra.BLAS.trmm! — Function
LinearAlgebra.BLAS.trmm — Function
LinearAlgebra.BLAS.trsm! — Function
LinearAlgebra.BLAS.trsm — Function
`LinearAlgebra.LAPACK` provides wrappers for some of the LAPACK functions for linear algebra. Those functions that overwrite one of the input arrays have names ending in `'!'`.
Usually a function has 4 methods defined, one each for `Float64`, `Float32`, `ComplexF64` and `ComplexF32` arrays.
Note that the LAPACK API provided by Julia can and will change in the future. Since this API is not user-facing, there is no commitment to support/deprecate this specific set of functions in future releases.
LinearAlgebra.LAPACK — Module
Interfaces to LAPACK subroutines.
LinearAlgebra.LAPACK.gbtrf! — Function
gbtrf!(kl, ku, m, AB) -> (AB, ipiv)
Compute the LU factorization of a banded matrix `AB`. `kl` is the first subdiagonal containing a nonzero band, `ku` is the last superdiagonal containing one, and `m` is the first dimension of the matrix `AB`. Returns the LU factorization in-place and `ipiv`, the vector of pivots used.
LinearAlgebra.LAPACK.gbtrs! — Function
gbtrs!(trans, kl, ku, m, AB, ipiv, B)
Solve the equation `AB * X = B`. `trans` determines the orientation of `AB`. It may be `N` (no transpose), `T` (transpose), or `C` (conjugate transpose). `kl` is the first subdiagonal containing a nonzero band, `ku` is the last superdiagonal containing one, and `m` is the first dimension of the matrix `AB`. `ipiv` is the vector of pivots returned from `gbtrf!`. Returns the vector or matrix `X`, overwriting `B` in-place.
LinearAlgebra.LAPACK.gebal! — Function
gebal!(job, A) -> (ilo, ihi, scale)
Balance the matrix `A` before computing its eigensystem or Schur factorization. `job` can be one of `N` (`A` will not be permuted or scaled), `P` (`A` will only be permuted), `S` (`A` will only be scaled), or `B` (`A` will be both permuted and scaled). Modifies `A` in-place and returns `ilo`, `ihi`, and `scale`. If permuting was turned on, `A[i,j] = 0` if `j > i` and `1 < j < ilo` or `j > ihi`. `scale` contains information about the scaling/permutations performed.
LinearAlgebra.LAPACK.gebak! — Function
gebak!(job, side, ilo, ihi, scale, V)
Transform the eigenvectors `V` of a matrix balanced using `gebal!` to the unscaled/unpermuted eigenvectors of the original matrix. Modifies `V` in-place. `side` can be `L` (left eigenvectors are transformed) or `R` (right eigenvectors are transformed).
LinearAlgebra.LAPACK.gebrd! — Function
gebrd!(A) -> (A, d, e, tauq, taup)
Reduce `A` in-place to bidiagonal form `A = QBP'`. Returns `A`, containing the bidiagonal matrix `B`; `d`, containing the diagonal elements of `B`; `e`, containing the off-diagonal elements of `B`; `tauq`, containing the elementary reflectors representing `Q`; and `taup`, containing the elementary reflectors representing `P`.
LinearAlgebra.LAPACK.gelqf! — Function
gelqf!(A, tau)
Compute the `LQ` factorization of `A`, `A = LQ`. `tau` contains scalars which parameterize the elementary reflectors of the factorization. `tau` must have length greater than or equal to the smallest dimension of `A`.
Returns `A` and `tau` modified in-place.
gelqf!(A) -> (A, tau)
Compute the `LQ` factorization of `A`, `A = LQ`.
Returns `A`, modified in-place, and `tau`, which contains scalars that parameterize the elementary reflectors of the factorization.
LinearAlgebra.LAPACK.geqlf! — Function
geqlf!(A, tau)
Compute the `QL` factorization of `A`, `A = QL`. `tau` contains scalars which parameterize the elementary reflectors of the factorization. `tau` must have length greater than or equal to the smallest dimension of `A`.
Returns `A` and `tau` modified in-place.
geqlf!(A) -> (A, tau)
Compute the `QL` factorization of `A`, `A = QL`.
Returns `A`, modified in-place, and `tau`, which contains scalars that parameterize the elementary reflectors of the factorization.
LinearAlgebra.LAPACK.geqrf! — Function
geqrf!(A, tau)
Compute the `QR` factorization of `A`, `A = QR`. `tau` contains scalars which parameterize the elementary reflectors of the factorization. `tau` must have length greater than or equal to the smallest dimension of `A`.
Returns `A` and `tau` modified in-place.
geqrf!(A) -> (A, tau)
Compute the `QR` factorization of `A`, `A = QR`.
Returns `A`, modified in-place, and `tau`, which contains scalars that parameterize the elementary reflectors of the factorization.
LinearAlgebra.LAPACK.geqp3! — Function
geqp3!(A, [jpvt, tau]) -> (A, tau, jpvt)
Compute the pivoted `QR` factorization of `A`, `AP = QR`, using BLAS level 3. `P` is a pivoting matrix, represented by `jpvt`. `tau` stores the elementary reflectors. The arguments `jpvt` and `tau` are optional and allow for passing preallocated arrays. When passed, `jpvt` must have length greater than or equal to `n` if `A` is an `(m x n)` matrix, and `tau` must have length greater than or equal to the smallest dimension of `A`. On entry, if `jpvt[j]` does not equal zero, then the `j`th column of `A` is permuted to the front of `AP`.
`A`, `jpvt`, and `tau` are modified in-place.
LinearAlgebra.LAPACK.gerqf! — Function
gerqf!(A, tau)
Compute the `RQ` factorization of `A`, `A = RQ`. `tau` contains scalars which parameterize the elementary reflectors of the factorization. `tau` must have length greater than or equal to the smallest dimension of `A`.
Returns `A` and `tau` modified in-place.
gerqf!(A) -> (A, tau)
Compute the `RQ` factorization of `A`, `A = RQ`.
Returns `A`, modified in-place, and `tau`, which contains scalars that parameterize the elementary reflectors of the factorization.
LinearAlgebra.LAPACK.geqrt! — Function
geqrt!(A, T)
Compute the blocked `QR` factorization of `A`, `A = QR`. `T` contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of `T` sets the block size and it must be between 1 and `n`. The second dimension of `T` must equal the smallest dimension of `A`.
Returns `A` and `T` modified in-place.
geqrt!(A, nb) -> (A, T)
Compute the blocked `QR` factorization of `A`, `A = QR`. `nb` sets the block size and it must be between 1 and `n`, the second dimension of `A`.
Returns `A`, modified in-place, and `T`, which contains upper triangular block reflectors that parameterize the elementary reflectors of the factorization.
LinearAlgebra.LAPACK.geqrt3! — Function
geqrt3!(A, T)
Recursively computes the blocked `QR` factorization of `A`, `A = QR`. `T` contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of `T` sets the block size and it must be between 1 and `n`. The second dimension of `T` must equal the smallest dimension of `A`.
Returns `A` and `T` modified in-place.
geqrt3!(A) -> (A, T)
Recursively computes the blocked `QR` factorization of `A`, `A = QR`.
Returns `A`, modified in-place, and `T`, which contains upper triangular block reflectors that parameterize the elementary reflectors of the factorization.
LinearAlgebra.LAPACK.getrf! — Function
getrf!(A, ipiv) -> (A, ipiv, info)
Compute the pivoted `LU` factorization of `A`, `A = LU`. `ipiv` contains the pivoting information and `info` a code which indicates success (`info = 0`), a singular value in `U` (`info = i`, in which case `U[i,i]` is singular), or an error code (`info < 0`).
getrf!(A) -> (A, ipiv, info)
Compute the pivoted `LU` factorization of `A`, `A = LU`.
Returns `A`, modified in-place, `ipiv`, the pivoting information, and an `info` code which indicates success (`info = 0`), a singular value in `U` (`info = i`, in which case `U[i,i]` is singular), or an error code (`info < 0`).
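As an end-to-end sketch (illustrative inputs; `getrs!` is documented further below): factor once with `getrf!`, then reuse the factors to solve.

```julia
julia> using LinearAlgebra

julia> A = [4.0 3.0; 6.0 3.0]; b = [1.0, 2.0];

julia> F, ipiv, info = LAPACK.getrf!(copy(A));    # LU factors overwrite the copy

julia> info                                       # 0 indicates success
0

julia> x = LAPACK.getrs!('N', F, ipiv, copy(b));  # solve A * x = b using the factors

julia> A * x ≈ b
true
```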
LinearAlgebra.LAPACK.tzrzf! — Function
tzrzf!(A) -> (A, tau)
Transforms the upper trapezoidal matrix `A` to upper triangular form in-place. Returns `A` and `tau`, the scalar parameters for the elementary reflectors of the transformation.
LinearAlgebra.LAPACK.ormrz! — Function
ormrz!(side, trans, A, tau, C)
Multiplies the matrix `C` by `Q` from the transformation supplied by `tzrzf!`. Depending on `side` or `trans`, the multiplication can be left-sided (`side = L`, `Q*C`) or right-sided (`side = R`, `C*Q`) and `Q` can be unmodified (`trans = N`), transposed (`trans = T`), or conjugate transposed (`trans = C`). Returns matrix `C`, which is modified in-place with the result of the multiplication.
LinearAlgebra.LAPACK.gels! — Function
gels!(trans, A, B) -> (F, B, ssr)
Solves the linear equation `A * X = B`, `transpose(A) * X = B`, or `adjoint(A) * X = B` using a QR or LQ factorization. Modifies the matrix/vector `B` in place with the solution. `A` is overwritten with its `QR` or `LQ` factorization. `trans` may be one of `N` (no modification), `T` (transpose), or `C` (conjugate transpose). `gels!` searches for the minimum norm/least squares solution. `A` may be under- or over-determined. The solution is returned in `B`.
LinearAlgebra.LAPACK.gesv! — Function
gesv!(A, B) -> (B, A, ipiv)
Solves the linear equation `A * X = B` where `A` is a square matrix using the `LU` factorization of `A`. `A` is overwritten with its `LU` factorization and `B` is overwritten with the solution `X`. `ipiv` contains the pivoting information for the `LU` factorization of `A`.
LinearAlgebra.LAPACK.getrs! — Function
getrs!(trans, A, ipiv, B)
Solves the linear equation `A * X = B`, `transpose(A) * X = B`, or `adjoint(A) * X = B` for square `A`. Modifies the matrix/vector `B` in place with the solution. `A` is the `LU` factorization from `getrf!`, with `ipiv` the pivoting information. `trans` may be one of `N` (no modification), `T` (transpose), or `C` (conjugate transpose).
LinearAlgebra.LAPACK.getri! — Function
getri!(A, ipiv)
Computes the inverse of `A`, using its `LU` factorization found by `getrf!`. `ipiv` is the pivot information output and `A` contains the `LU` factorization of `getrf!`. `A` is overwritten with its inverse.
LinearAlgebra.LAPACK.gesvx! — Function
gesvx!(fact, trans, A, AF, ipiv, equed, R, C, B) -> (X, equed, R, C, B, rcond, ferr, berr, work)
Solves the linear equation `A * X = B` (`trans = N`), `transpose(A) * X = B` (`trans = T`), or `adjoint(A) * X = B` (`trans = C`) using the `LU` factorization of `A`. `fact` may be `E`, in which case `A` will be equilibrated and copied to `AF`; `F`, in which case `AF` and `ipiv` from a previous `LU` factorization are inputs; or `N`, in which case `A` will be copied to `AF` and then factored. If `fact = F`, `equed` may be `N`, meaning `A` has not been equilibrated; `R`, meaning `A` was multiplied by `Diagonal(R)` from the left; `C`, meaning `A` was multiplied by `Diagonal(C)` from the right; or `B`, meaning `A` was multiplied by `Diagonal(R)` from the left and `Diagonal(C)` from the right. If `fact = F` and `equed = R` or `B`, the elements of `R` must all be positive. If `fact = F` and `equed = C` or `B`, the elements of `C` must all be positive.
Returns the solution `X`; `equed`, which is an output if `fact` is not `N`, and describes the equilibration that was performed; `R`, the row equilibration diagonal; `C`, the column equilibration diagonal; `B`, which may be overwritten with its equilibrated form `Diagonal(R)*B` (if `trans = N` and `equed = R,B`) or `Diagonal(C)*B` (if `trans = T,C` and `equed = C,B`); `rcond`, the reciprocal condition number of `A` after equilibrating; `ferr`, the forward error bound for each solution vector in `X`; `berr`, the componentwise relative backward error for each solution vector in `X`; and `work`, the reciprocal pivot growth factor.
gesvx!(A, B)
The no-equilibration, no-transpose simplification of `gesvx!`.
LinearAlgebra.LAPACK.gelsd! — Function

gelsd!(A, B, rcond) -> (B, rnk)

Computes the least-norm solution of A * X = B by finding the SVD factorization of A, then dividing-and-conquering the problem. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.
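For example, a rank-aware least-squares sketch (illustrative values; singular values below the given rcond are discarded):

using LinearAlgebra: LAPACK

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
B = reshape([1.0, 2.0, 3.0], 3, 1)

# Minimum-norm least-squares solution via divide-and-conquer SVD.
X, rnk = LAPACK.gelsd!(copy(A), copy(B), 1e-12)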
LinearAlgebra.LAPACK.gelsy! — Function

gelsy!(A, B, rcond) -> (B, rnk)

Computes the least-norm solution of A * X = B using a complete orthogonal factorization of A (a QR factorization with column pivoting). B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.

LinearAlgebra.LAPACK.gglse! — Function

gglse!(A, c, B, d) -> (X, res)

Solves the least-squares problem of minimizing ||c - A*x||^2 subject to the equality constraint B * x = d. Returns X and the residual sum of squares.
LinearAlgebra.LAPACK.geev! — Function

geev!(jobvl, jobvr, A) -> (W, VL, VR)

Finds the eigensystem of A. If jobvl = N, the left eigenvectors of A aren't computed. If jobvr = N, the right eigenvectors of A aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. Returns the eigenvalues in W, the right eigenvectors in VR, and the left eigenvectors in VL.
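For example (illustrative values, chosen so the eigenvalues are real; geev! overwrites its input):

using LinearAlgebra: LAPACK

A = [3.0 1.0; 0.0 2.0]             # eigenvalues 3 and 2

# Right eigenvectors only; W holds the eigenvalues, VR the eigenvectors,
# and VL is empty because jobvl = N.
W, VL, VR = LAPACK.geev!('N', 'V', copy(A))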
LinearAlgebra.LAPACK.gesdd! — Function

gesdd!(job, A) -> (U, S, VT)

Finds the singular value decomposition of A, A = U * S * V', using a divide and conquer approach. If job = A, all the columns of U and the rows of V' are computed. If job = N, no columns of U or rows of V' are computed. If job = O, A is overwritten with the columns of (thin) U and the rows of (thin) V'. If job = S, the columns of (thin) U and the rows of (thin) V' are computed and returned separately.

LinearAlgebra.LAPACK.gesvd! — Function

gesvd!(jobu, jobvt, A) -> (U, S, VT)

Finds the singular value decomposition of A, A = U * S * V'. If jobu = A, all the columns of U are computed. If jobvt = A, all the rows of V' are computed. If jobu = N, no columns of U are computed. If jobvt = N, no rows of V' are computed. If jobu = O, A is overwritten with the columns of (thin) U. If jobvt = O, A is overwritten with the rows of (thin) V'. If jobu = S, the columns of (thin) U are computed and returned separately. If jobvt = S, the rows of (thin) V' are computed and returned separately. jobu and jobvt can't both be O.

Returns U, S, and Vt, where S contains the singular values of A.
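For example, a thin-SVD sketch (illustrative values):

using LinearAlgebra: LAPACK

A = [1.0 0.0; 0.0 2.0; 0.0 0.0]

# jobu = jobvt = S returns the reduced factors separately.
U, S, Vt = LAPACK.gesvd!('S', 'S', copy(A))
# U * Diagonal(S) * Vt ≈ A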
LinearAlgebra.LAPACK.ggsvd! — Function

ggsvd!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V, the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv, or jobq is N, that matrix is not computed. This function is only available in LAPACK versions prior to 3.6.0.

LinearAlgebra.LAPACK.ggsvd3! — Function

ggsvd3!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V, the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv, or jobq is N, that matrix is not computed. This function requires LAPACK 3.6.0.

LinearAlgebra.LAPACK.geevx! — Function

geevx!(balanc, jobvl, jobvr, sense, A) -> (A, w, VL, VR, ilo, ihi, scale, abnrm, rconde, rcondv)

Finds the eigensystem of A with matrix balancing. If jobvl = N, the left eigenvectors of A aren't computed. If jobvr = N, the right eigenvectors of A aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. If balanc = N, no balancing is performed. If balanc = P, A is permuted but not scaled. If balanc = S, A is scaled but not permuted. If balanc = B, A is permuted and scaled. If sense = N, no reciprocal condition numbers are computed. If sense = E, reciprocal condition numbers are computed for the eigenvalues only. If sense = V, reciprocal condition numbers are computed for the right eigenvectors only. If sense = B, reciprocal condition numbers are computed for both the eigenvalues and the right eigenvectors. If sense = E or B, both the left and right eigenvectors must be computed.

LinearAlgebra.LAPACK.ggev! — Function

ggev!(jobvl, jobvr, A, B) -> (alpha, beta, vl, vr)

Finds the generalized eigendecomposition of A and B. If jobvl = N, the left eigenvectors aren't computed. If jobvr = N, the right eigenvectors aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed.

LinearAlgebra.LAPACK.ggev3! — Function

ggev3!(jobvl, jobvr, A, B) -> (alpha, beta, vl, vr)

Finds the generalized eigendecomposition of A and B using a blocked algorithm. If jobvl = N, the left eigenvectors aren't computed. If jobvr = N, the right eigenvectors aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. This function requires LAPACK 3.6.0.

LinearAlgebra.LAPACK.gtsv! — Function

gtsv!(dl, d, du, B)

Solves the equation A * X = B where A is a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal.

Overwrites B with the solution X and returns it.
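For example (illustrative values; all four arguments are overwritten, so copies are passed):

using LinearAlgebra: LAPACK

dl = [1.0, 1.0]        # subdiagonal
d  = [4.0, 4.0, 4.0]   # diagonal
du = [1.0, 1.0]        # superdiagonal
b  = [1.0, 2.0, 3.0]

# Solves the tridiagonal system in linear time.
x = LAPACK.gtsv!(copy(dl), copy(d), copy(du), copy(b))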
LinearAlgebra.LAPACK.gttrf! — Function

gttrf!(dl, d, du) -> (dl, d, du, du2, ipiv)

Finds the LU factorization of a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal.

Modifies dl, d, and du in place and returns them along with the second superdiagonal du2 and the pivoting vector ipiv.

LinearAlgebra.LAPACK.gttrs! — Function

gttrs!(trans, dl, d, du, du2, ipiv, B)

Solves the equation A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C) using the LU factorization computed by gttrf!. B is overwritten with the solution X.

LinearAlgebra.LAPACK.orglq! — Function

orglq!(A, tau, k = length(tau))

Explicitly finds the matrix Q of an LQ factorization after calling gelqf! on A. Uses the output of gelqf!. A is overwritten by Q.

LinearAlgebra.LAPACK.orgqr! — Function

orgqr!(A, tau, k = length(tau))

Explicitly finds the matrix Q of a QR factorization after calling geqrf! on A. Uses the output of geqrf!. A is overwritten by Q.
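A typical pattern pairs orgqr! with geqrf!, which produces the compact reflector representation it consumes. A minimal sketch (illustrative values):

using LinearAlgebra: LAPACK

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]

Af, tau = LAPACK.geqrf!(copy(A))  # compact QR: R in the upper triangle,
                                  # reflectors below it, scalars in tau
Q = LAPACK.orgqr!(Af, tau)        # expand the reflectors into thin Q (3×2)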
LinearAlgebra.LAPACK.orgql! — Function

orgql!(A, tau, k = length(tau))

Explicitly finds the matrix Q of a QL factorization after calling geqlf! on A. Uses the output of geqlf!. A is overwritten by Q.

LinearAlgebra.LAPACK.orgrq! — Function

orgrq!(A, tau, k = length(tau))

Explicitly finds the matrix Q of an RQ factorization after calling gerqf! on A. Uses the output of gerqf!. A is overwritten by Q.

LinearAlgebra.LAPACK.ormlq! — Function

ormlq!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), or adjoint(Q) * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from an LQ factorization of A computed using gelqf!. C is overwritten.

LinearAlgebra.LAPACK.ormqr! — Function

ormqr!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), or adjoint(Q) * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QR factorization of A computed using geqrf!. C is overwritten.

LinearAlgebra.LAPACK.ormql! — Function

ormql!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), or adjoint(Q) * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QL factorization of A computed using geqlf!. C is overwritten.

LinearAlgebra.LAPACK.ormrq! — Function

ormrq!(side, trans, A, tau, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), or adjoint(Q) * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from an RQ factorization of A computed using gerqf!. C is overwritten.

LinearAlgebra.LAPACK.gemqrt! — Function

gemqrt!(side, trans, V, T, C)

Computes Q * C (trans = N), transpose(Q) * C (trans = T), or adjoint(Q) * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QR factorization of A computed using geqrt!. C is overwritten.

LinearAlgebra.LAPACK.posv! — Function

posv!(uplo, A, B) -> (A, B)

Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix. If uplo = U, the upper Cholesky decomposition of A is computed. If uplo = L, the lower Cholesky decomposition of A is computed. A is overwritten by its Cholesky decomposition. B is overwritten with the solution X.
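For example (illustrative values; A must be positive definite):

using LinearAlgebra: LAPACK

A = [4.0 1.0; 1.0 3.0]             # symmetric positive definite
b = [1.0, 2.0]

# Cholesky-based solve; A is replaced by its factor, b by the solution.
chol, x = LAPACK.posv!('U', copy(A), copy(b))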
LinearAlgebra.LAPACK.potrf! — Function

potrf!(uplo, A)

Computes the Cholesky (upper if uplo = U, lower if uplo = L) decomposition of positive-definite matrix A. A is overwritten and returned with an info code.

LinearAlgebra.LAPACK.potri! — Function

potri!(uplo, A)

Computes the inverse of positive-definite matrix A after calling potrf! to find its (upper if uplo = U, lower if uplo = L) Cholesky decomposition.

A is overwritten by its inverse and returned.

LinearAlgebra.LAPACK.potrs! — Function

potrs!(uplo, A, B)

Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix whose Cholesky decomposition was computed by potrf!. If uplo = U, the upper Cholesky decomposition of A was computed. If uplo = L, the lower Cholesky decomposition of A was computed. B is overwritten with the solution X.
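A common pattern is to factorize once with potrf! and then solve repeatedly with potrs!. A minimal sketch (illustrative values):

using LinearAlgebra: LAPACK

A = [4.0 1.0; 1.0 3.0]                  # symmetric positive definite

F, info = LAPACK.potrf!('U', copy(A))   # upper Cholesky factor; info == 0 on success
x1 = LAPACK.potrs!('U', F, [1.0, 2.0])  # reuse the factor for several right-hand sides
x2 = LAPACK.potrs!('U', F, [0.0, 1.0])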
LinearAlgebra.LAPACK.pstrf! — Function

pstrf!(uplo, A, tol) -> (A, piv, rank, info)

Computes the (upper if uplo = U, lower if uplo = L) pivoted Cholesky decomposition of positive-definite matrix A with a user-set tolerance tol. A is overwritten by its Cholesky decomposition.

Returns A, the pivots piv, the rank of A, and an info code. If info = 0, the factorization succeeded. If info = i > 0, then A is indefinite or rank-deficient.

LinearAlgebra.LAPACK.ptsv! — Function

ptsv!(D, E, B)

Solves A * X = B for positive-definite tridiagonal A. D is the diagonal of A and E is the off-diagonal. B is overwritten with the solution X and returned.
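For example (illustrative values):

using LinearAlgebra: LAPACK

D = [4.0, 4.0, 4.0]    # diagonal
E = [1.0, 1.0]         # off-diagonal
b = [1.0, 2.0, 3.0]

# Specialized solver for symmetric positive-definite tridiagonal systems.
x = LAPACK.ptsv!(copy(D), copy(E), copy(b))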
LinearAlgebra.LAPACK.pttrf! — Function

pttrf!(D, E)

Computes the LDLt factorization of a positive-definite tridiagonal matrix with D as diagonal and E as off-diagonal. D and E are overwritten and returned.

LinearAlgebra.LAPACK.pttrs! — Function

pttrs!(D, E, B)

Solves A * X = B for positive-definite tridiagonal A with diagonal D and off-diagonal E after computing A's LDLt factorization using pttrf!. B is overwritten with the solution X.

LinearAlgebra.LAPACK.trtri! — Function

trtri!(uplo, diag, A)

Finds the inverse of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. A is overwritten with its inverse.

LinearAlgebra.LAPACK.trtrs! — Function

trtrs!(uplo, trans, diag, A, B)

Solves A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C) for (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. B is overwritten with the solution X.
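For example, a back-substitution sketch (illustrative values):

using LinearAlgebra: LAPACK

A = [2.0 1.0; 0.0 3.0]   # upper triangular, non-unit diagonal
b = [3.0, 6.0]

# Solves A * x = b directly; no factorization is needed.
x = LAPACK.trtrs!('U', 'N', 'N', A, copy(b))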
LinearAlgebra.LAPACK.trcon! — Function

trcon!(norm, uplo, diag, A)

Finds the reciprocal condition number of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. If norm = I, the condition number is found in the infinity norm. If norm = O or 1, the condition number is found in the one norm.

LinearAlgebra.LAPACK.trevc! — Function

trevc!(side, howmny, select, T, VL = similar(T), VR = similar(T))

Finds the eigensystem of an upper triangular matrix T. If side = R, the right eigenvectors are computed. If side = L, the left eigenvectors are computed. If side = B, both sets are computed. If howmny = A, all eigenvectors are found. If howmny = B, all eigenvectors are found and backtransformed using VL and VR. If howmny = S, only the eigenvectors corresponding to the values in select are computed.

LinearAlgebra.LAPACK.trrfs! — Function

trrfs!(uplo, trans, diag, A, B, X, Ferr, Berr) -> (Ferr, Berr)

Estimates the error in the solution to A * X = B (trans = N), transpose(A) * X = B (trans = T), or adjoint(A) * X = B (trans = C), or in the equivalent right-sided problem X * A = B, after computing X using trtrs!. If uplo = U, A is upper triangular. If uplo = L, A is lower triangular. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. Ferr and Berr are optional inputs. Ferr is the forward error and Berr is the backward error, each component-wise.

LinearAlgebra.LAPACK.stev! — Function

stev!(job, dv, ev) -> (dv, Zmat)

Computes the eigensystem for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If job = N, only the eigenvalues are found and returned in dv. If job = V, then the eigenvectors are also found and returned in Zmat.
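For example (illustrative values):

using LinearAlgebra: LAPACK

dv = [2.0, 2.0, 2.0]   # diagonal
ev = [-1.0, -1.0]      # off-diagonal

# Eigenvalues and eigenvectors of the symmetric tridiagonal matrix.
vals, Z = LAPACK.stev!('V', copy(dv), copy(ev))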
LinearAlgebra.LAPACK.stebz! — Function

stebz!(range, order, vl, vu, il, iu, abstol, dv, ev) -> (dv, iblock, isplit)

Computes the eigenvalues for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. If order = B, eigenvalues are ordered within a block. If order = E, they are ordered across all the blocks. abstol can be set as a tolerance for convergence.

LinearAlgebra.LAPACK.stegr! — Function

stegr!(jobz, range, dv, ev, vl, vu, il, iu) -> (w, Z)

Computes the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. The eigenvalues are returned in w and the eigenvectors in Z.

LinearAlgebra.LAPACK.stein! — Function

stein!(dv, ev_in, w_in, iblock_in, isplit_in)

Computes the eigenvectors for a symmetric tridiagonal matrix with dv as diagonal and ev_in as off-diagonal. w_in specifies the input eigenvalues for which to find corresponding eigenvectors. iblock_in specifies the submatrices corresponding to the eigenvalues in w_in. isplit_in specifies the splitting points between the submatrix blocks.

LinearAlgebra.LAPACK.syconv! — Function

syconv!(uplo, A, ipiv) -> (A, work)

Converts a symmetric matrix A (which has been factorized into a triangular matrix) into two matrices L and D. If uplo = U, A is upper triangular. If uplo = L, it is lower triangular. ipiv is the pivot vector from the triangular factorization. A is overwritten by L and D.

LinearAlgebra.LAPACK.sysv! — Function

sysv!(uplo, A, B) -> (B, A, ipiv)

Finds the solution to A * X = B for symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.
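For example (illustrative values; the matrix is symmetric but indefinite, so the Bunch-Kaufman path applies):

using LinearAlgebra: LAPACK

A = [1.0 2.0; 2.0 -3.0]
b = [1.0, 0.0]

# Only the upper half of A is referenced with uplo = 'U'.
x, Af, ipiv = LAPACK.sysv!('U', copy(A), copy(b))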
LinearAlgebra.LAPACK.sytrf! — Function

sytrf!(uplo, A) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.

Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info, which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

sytrf!(uplo, A, ipiv) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.

Returns A, overwritten by the factorization, the pivot vector ipiv, and the error code info, which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

LinearAlgebra.LAPACK.sytri! — Function

sytri!(uplo, A, ipiv)

Computes the inverse of a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.

LinearAlgebra.LAPACK.sytrs! — Function

sytrs!(uplo, A, ipiv, B)

Solves the equation A * X = B for a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.

LinearAlgebra.LAPACK.hesv! — Function

hesv!(uplo, A, B) -> (B, A, ipiv)

Finds the solution to A * X = B for Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.

LinearAlgebra.LAPACK.hetrf! — Function

hetrf!(uplo, A) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.

Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info, which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

hetrf!(uplo, A, ipiv) -> (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.

Returns A, overwritten by the factorization, the pivot vector ipiv, and the error code info, which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

LinearAlgebra.LAPACK.hetri! — Function

hetri!(uplo, A, ipiv)

Computes the inverse of a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.

LinearAlgebra.LAPACK.hetrs! — Function

hetrs!(uplo, A, ipiv, B)

Solves the equation A * X = B for a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.
LinearAlgebra.LAPACK.syev! — Function

syev!(jobz, uplo, A)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used.

LinearAlgebra.LAPACK.syevr! — Function

syevr!(jobz, range, uplo, A, vl, vu, il, iu, abstol) -> (W, Z)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. abstol can be set as a tolerance for convergence.

The eigenvalues are returned in W and the eigenvectors in Z.
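For example, computing the full spectrum (illustrative values; with range = A the bounds vl, vu, il, iu are ignored, and abstol = -1.0 requests the default tolerance):

using LinearAlgebra: LAPACK

A = [2.0 1.0; 1.0 2.0]

W, Z = LAPACK.syevr!('V', 'A', 'U', copy(A), 0.0, 0.0, 0, 0, -1.0)
# W holds the eigenvalues (here 1 and 3), Z the eigenvectors.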
LinearAlgebra.LAPACK.syevd! — Function

syevd!(jobz, uplo, A)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used.

Uses the divide-and-conquer method, instead of the QR iteration used by syev! or the multiple relatively robust representations used by syevr!. See James W. Demmel et al., SIAM J. Sci. Comput. 30, 3, 1508 (2008) for a comparison of the accuracy and performance of the different methods.

LinearAlgebra.LAPACK.sygvd! — Function

sygvd!(itype, jobz, uplo, A, B) -> (w, A, B)

Finds the generalized eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A and symmetric positive-definite matrix B. If uplo = U, the upper triangles of A and B are used. If uplo = L, the lower triangles of A and B are used. If itype = 1, the problem to solve is A * x = lambda * B * x. If itype = 2, the problem to solve is A * B * x = lambda * x. If itype = 3, the problem to solve is B * A * x = lambda * x.

LinearAlgebra.LAPACK.bdsqr! — Function

bdsqr!(uplo, d, e_, Vt, U, C) -> (d, Vt, U, C)

Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. Can optionally also compute the product Q' * C.

Returns the singular values in d, and the matrix C overwritten with Q' * C.

LinearAlgebra.LAPACK.bdsdc! — Function

bdsdc!(uplo, compq, d, e_) -> (d, e, u, vt, q, iq)

Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal using a divide and conquer method. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. If compq = N, only the singular values are found. If compq = I, the singular values and vectors are found. If compq = P, the singular values and vectors are found in compact form. Only works for real types.

Returns the singular values in d, and if compq = P, the compact singular vectors in iq.

LinearAlgebra.LAPACK.gecon! — Function

gecon!(normtype, A, anorm)

Finds the reciprocal condition number of matrix A. If normtype = I, the condition number is found in the infinity norm. If normtype = O or 1, the condition number is found in the one norm. A must be the result of getrf! and anorm is the norm of A in the relevant norm.
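For example (illustrative values; note that the norm of A must be computed before getrf! overwrites it):

using LinearAlgebra: LAPACK, opnorm

A = [1.0 2.0; 3.0 4.0]
anorm = opnorm(A, 1)                    # one-norm of A, taken up front

luA, ipiv, info = LAPACK.getrf!(copy(A))
rcond = LAPACK.gecon!('O', luA, anorm)  # reciprocal condition estimate in the one norm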
LinearAlgebra.LAPACK.gehrd! — Function

gehrd!(ilo, ihi, A) -> (A, tau)

Converts a matrix A to Hessenberg form. If A is balanced with gebal! then ilo and ihi are the outputs of gebal!. Otherwise they should be ilo = 1 and ihi = size(A,2). tau contains the elementary reflectors of the factorization.

LinearAlgebra.LAPACK.orghr! — Function

orghr!(ilo, ihi, A, tau)

Explicitly finds Q, the orthogonal/unitary matrix from gehrd!. ilo, ihi, A, and tau must correspond to the input/output to gehrd!.

LinearAlgebra.LAPACK.gees! — Function

gees!(jobvs, A) -> (A, vs, w)

Computes the eigenvalues (jobvs = N) or the eigenvalues and Schur vectors (jobvs = V) of matrix A. A is overwritten by its Schur form.

Returns A, vs containing the Schur vectors, and w, containing the eigenvalues.
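For example (illustrative values):

using LinearAlgebra: LAPACK

A = [4.0 1.0; 2.0 3.0]             # eigenvalues 5 and 2

# Schur form of A (overwriting the copy), Schur vectors, and eigenvalues.
T, vs, w = LAPACK.gees!('V', copy(A))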
LinearAlgebra.LAPACK.gges! — Function

gges!(jobvsl, jobvsr, A, B) -> (A, B, alpha, beta, vsl, vsr)

Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (jobvsl = V), or right Schur vectors (jobvsr = V) of A and B.

The generalized eigenvalues are returned in alpha and beta. The left Schur vectors are returned in vsl and the right Schur vectors are returned in vsr.

LinearAlgebra.LAPACK.gges3! — Function

gges3!(jobvsl, jobvsr, A, B) -> (A, B, alpha, beta, vsl, vsr)

Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (jobvsl = V), or right Schur vectors (jobvsr = V) of A and B using a blocked algorithm. This function requires LAPACK 3.6.0.

The generalized eigenvalues are returned in alpha and beta. The left Schur vectors are returned in vsl and the right Schur vectors are returned in vsr.

LinearAlgebra.LAPACK.trexc! — Function

trexc!(compq, ifst, ilst, T, Q) -> (T, Q)
trexc!(ifst, ilst, T, Q) -> (T, Q)

Reorders the Schur factorization T of a matrix, such that the diagonal block of T with row index ifst is moved to row index ilst. If compq = V, the Schur vectors Q are reordered. If compq = N, they are not modified. The 4-arg method calls the 5-arg method with compq = V.

LinearAlgebra.LAPACK.trsen! — Function

trsen!(job, compq, select, T, Q) -> (T, Q, w, s, sep)
trsen!(select, T, Q) -> (T, Q, w, s, sep)

Reorders the Schur factorization of a matrix and optionally finds reciprocal condition numbers. If job = N, no condition numbers are found. If job = E, only the condition number for this cluster of eigenvalues is found. If job = V, only the condition number for the invariant subspace is found. If job = B, then the condition numbers for the cluster and subspace are found. If compq = V, the Schur vectors Q are updated. If compq = N, the Schur vectors are not modified. select determines which eigenvalues are in the cluster. The 3-arg method calls the 5-arg method with job = N and compq = V.

Returns T, Q, reordered eigenvalues in w, the condition number of the cluster of eigenvalues s, and the condition number of the invariant subspace sep.

LinearAlgebra.LAPACK.tgsen! — Function

tgsen!(select, S, T, Q, Z) -> (S, T, alpha, beta, Q, Z)

Reorders the vectors of a generalized Schur decomposition. select specifies the eigenvalues in each cluster.

LinearAlgebra.LAPACK.trsyl! — Function

trsyl!(transa, transb, A, B, C, isgn=1) -> (C, scale)

Solves the Sylvester matrix equation A * X +/- X * B = scale*C where A and B are both quasi-upper triangular. If transa = N, A is not modified. If transa = T, A is transposed. If transa = C, A is conjugate transposed. Similarly for transb and B. If isgn = 1, the equation A * X + X * B = scale * C is solved. If isgn = -1, the equation A * X - X * B = scale * C is solved.

Returns X (overwriting C) and scale.
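For example, a minimal Sylvester-equation sketch (illustrative values; A and B here are already upper triangular, as the routine expects):

using LinearAlgebra: LAPACK

A = [1.0 2.0; 0.0 3.0]   # upper triangular
B = [4.0 1.0; 0.0 5.0]   # upper triangular
C = [1.0 0.0; 0.0 1.0]

# Solves A * X + X * B = scale * C; C is overwritten with X.
X, scale = LAPACK.trsyl!('N', 'N', A, B, copy(C))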
LinearAlgebra.LAPACK.hseqr! — Function

hseqr!(job, compz, ilo, ihi, H, Z) -> (H, Z, w)

Computes all eigenvalues and (optionally) the Schur factorization of a matrix reduced to Hessenberg form. If H is balanced with gebal! then ilo and ihi are the outputs of gebal!. Otherwise they should be ilo = 1 and ihi = size(H,2). The eigenvalues are returned in w.