Triangular matrix

From Wikipedia, the free encyclopedia
Special kind of square matrix
Not to be confused with a triangular array, a related concept.
For the rings, see triangular matrix ring.
"Triangularization" redirects here. For the geometric process, see Triangulation.

In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero.

Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix L and an upper triangular matrix U if and only if all its leading principal minors are non-zero.
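
A minimal sketch, assuming NumPy and SciPy are available, that factors a small matrix and checks the triangular shapes of the factors; note that scipy.linalg.lu performs partial pivoting, so it returns a permutation matrix P in addition to L and U.

    # Sketch, assuming NumPy and SciPy are installed.
    # scipy.linalg.lu uses partial pivoting, so it returns a permutation
    # matrix P together with the lower (L) and upper (U) triangular factors.
    import numpy as np
    from scipy.linalg import lu

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    P, L, U = lu(A)

    print(np.allclose(A, P @ L @ U))    # True: A = P L U
    print(np.allclose(L, np.tril(L)))   # True: L is lower triangular
    print(np.allclose(U, np.triu(U)))   # True: U is upper triangular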

Description


A matrix of the form

$$L = \begin{bmatrix} \ell_{1,1} & & & & 0 \\ \ell_{2,1} & \ell_{2,2} & & & \\ \ell_{3,1} & \ell_{3,2} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n} \end{bmatrix}$$

is called a lower triangular matrix or left triangular matrix, and analogously a matrix of the form

$$U = \begin{bmatrix} u_{1,1} & u_{1,2} & u_{1,3} & \ldots & u_{1,n} \\ & u_{2,2} & u_{2,3} & \ldots & u_{2,n} \\ & & \ddots & \ddots & \vdots \\ & & & \ddots & u_{n-1,n} \\ 0 & & & & u_{n,n} \end{bmatrix}$$

is called an upper triangular matrix or right triangular matrix. A lower or left triangular matrix is commonly denoted with the variable L, and an upper or right triangular matrix is commonly denoted with the variable U or R.

A matrix that is both upper and lower triangular is diagonal. Matrices that are similar to triangular matrices are called triangularisable.

A non-square (or sometimes any) matrix with zeros above (below) the diagonal is called a lower (upper) trapezoidal matrix. The non-zero entries form the shape of a trapezoid.
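
In numerical code, triangular shapes are typically produced or tested by masking entries relative to the main diagonal; a small sketch assuming NumPy:

    # Sketch, assuming NumPy: np.tril/np.triu zero out the entries above/below
    # the main diagonal, which is also a convenient way to test triangularity.
    import numpy as np

    A = np.arange(1.0, 10.0).reshape(3, 3)
    L = np.tril(A)   # keep entries on and below the diagonal
    U = np.triu(A)   # keep entries on and above the diagonal

    def is_lower_triangular(M):
        return np.allclose(M, np.tril(M))

    def is_upper_triangular(M):
        return np.allclose(M, np.triu(M))

    print(is_lower_triangular(L), is_upper_triangular(U))   # True True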

Examples


The matrix

$$\begin{bmatrix} 1 & 0 & 0 \\ 2 & 96 & 0 \\ 4 & 9 & 69 \end{bmatrix}$$

is the lower triangular part of the non-symmetric matrix:

$$\begin{bmatrix} 1 & 5 & 8 \\ 2 & 96 & 9 \\ 4 & 9 & 69 \end{bmatrix}$$

and

$$\begin{bmatrix} 1 & 4 & 1 \\ 0 & 6 & 9 \\ 0 & 0 & 1 \end{bmatrix}$$

is the upper triangular part of the non-symmetric matrix:

$$\begin{bmatrix} 1 & 4 & 1 \\ 99 & 6 & 9 \\ 40 & 88 & 1 \end{bmatrix}$$

Forward and back substitution


A matrix equation in the form $L\mathbf{x} = \mathbf{b}$ or $U\mathbf{x} = \mathbf{b}$ is very easy to solve by an iterative process called forward substitution for lower triangular matrices and analogously back substitution for upper triangular matrices. The process is so called because for lower triangular matrices, one first computes $x_1$, then substitutes that forward into the next equation to solve for $x_2$, and repeats through to $x_n$. In an upper triangular matrix, one works backwards, first computing $x_n$, then substituting that back into the previous equation to solve for $x_{n-1}$, and repeating through $x_1$.

Notice that this does not require inverting the matrix.

Forward substitution


The matrix equation $L\mathbf{x} = \mathbf{b}$ can be written as a system of linear equations

$$\begin{matrix} \ell_{1,1} x_1 & & & & & & & = & b_1 \\ \ell_{2,1} x_1 & + & \ell_{2,2} x_2 & & & & & = & b_2 \\ \vdots & & \vdots & & \ddots & & & & \vdots \\ \ell_{m,1} x_1 & + & \ell_{m,2} x_2 & + & \dotsb & + & \ell_{m,m} x_m & = & b_m \end{matrix}$$

Observe that the first equation ($\ell_{1,1} x_1 = b_1$) only involves $x_1$, and thus one can solve for $x_1$ directly. The second equation only involves $x_1$ and $x_2$, and thus can be solved once one substitutes in the already solved value for $x_1$. Continuing in this way, the $k$-th equation only involves $x_1, \dots, x_k$, and one can solve for $x_k$ using the previously solved values for $x_1, \dots, x_{k-1}$. The resulting formulas are:

$$\begin{aligned} x_1 &= \frac{b_1}{\ell_{1,1}}, \\ x_2 &= \frac{b_2 - \ell_{2,1} x_1}{\ell_{2,2}}, \\ &\;\;\vdots \\ x_m &= \frac{b_m - \sum_{i=1}^{m-1} \ell_{m,i} x_i}{\ell_{m,m}}. \end{aligned}$$

A matrix equation with an upper triangular matrix U can be solved in an analogous way, only working backwards.
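
A direct implementation of these formulas, as a sketch assuming NumPy; library routines such as scipy.linalg.solve_triangular perform the same computation with additional checking.

    # Sketch, assuming NumPy: forward substitution for L x = b and back
    # substitution for U x = b, following the formulas above.
    import numpy as np

    def forward_substitution(L, b):
        n = len(b)
        x = np.zeros(n)
        for k in range(n):                        # top equation first
            x[k] = (b[k] - L[k, :k] @ x[:k]) / L[k, k]
        return x

    def back_substitution(U, b):
        n = len(b)
        x = np.zeros(n)
        for k in reversed(range(n)):              # bottom equation first
            x[k] = (b[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
        return x

    L = np.array([[2.0, 0.0],
                  [3.0, 4.0]])
    b = np.array([2.0, 11.0])
    print(forward_substitution(L, b))             # [1. 2.]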

Applications


Forward substitution is used in financial bootstrapping to construct a yield curve.

Properties


The transpose of an upper triangular matrix is a lower triangular matrix and vice versa.

A matrix which is both symmetric and triangular is diagonal. In a similar vein, a matrix which is both normal (meaning A*A = AA*, where A* is the conjugate transpose) and triangular is also diagonal. This can be seen by looking at the diagonal entries of A*A and AA*.

The determinant and permanent of a triangular matrix equal the product of the diagonal entries, as can be checked by direct computation.
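
A quick numerical illustration, assuming NumPy:

    # The determinant of a triangular matrix is the product of its diagonal.
    import numpy as np

    U = np.array([[2.0, 5.0, 1.0],
                  [0.0, 3.0, 4.0],
                  [0.0, 0.0, 7.0]])
    print(np.linalg.det(U))        # 42.0, up to floating-point rounding
    print(np.prod(np.diag(U)))     # 42.0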

In fact more is true: the eigenvalues of a triangular matrix are exactly its diagonal entries. Moreover, each eigenvalue occurs exactly k times on the diagonal, where k is its algebraic multiplicity, that is, its multiplicity as a root of the characteristic polynomial $p_A(x) = \det(xI - A)$ of A. In other words, the characteristic polynomial of a triangular n×n matrix A is exactly

$$p_A(x) = (x - a_{11})(x - a_{22}) \cdots (x - a_{nn}),$$

that is, the unique degree n polynomial whose roots are the diagonal entries of A (with multiplicities). To see this, observe that $xI - A$ is also triangular and hence its determinant $\det(xI - A)$ is the product of its diagonal entries $(x - a_{11})(x - a_{22}) \cdots (x - a_{nn})$.[1]
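
This can be checked numerically; a sketch assuming NumPy, reusing the upper triangular example from above:

    # The eigenvalues of a triangular matrix are its diagonal entries,
    # with algebraic multiplicities.
    import numpy as np

    U = np.array([[1.0, 4.0, 1.0],
                  [0.0, 6.0, 9.0],
                  [0.0, 0.0, 1.0]])
    print(sorted(np.linalg.eigvals(U)))   # [1.0, 1.0, 6.0]
    print(sorted(np.diag(U)))             # [1.0, 1.0, 6.0]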

Special forms


Unitriangular matrix


If the entries on the main diagonal of a (lower or upper) triangular matrix are all 1, the matrix is called (lower or upper) unitriangular.

Other names used for these matrices are unit (lower or upper) triangular, or very rarely normed (lower or upper) triangular. However, a unit triangular matrix is not the same as the unit matrix, and a normed triangular matrix has nothing to do with the notion of matrix norm.

All finite unitriangular matrices are unipotent.

Strictly triangular matrix


If all of the entries on the main diagonal of a (lower or upper) triangular matrix are also 0, the matrix is called strictly (lower or upper) triangular.

All finite strictly triangular matrices are nilpotent of index at most n as a consequence of the Cayley–Hamilton theorem.
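
A small numerical illustration of this nilpotency, assuming NumPy:

    # A 3x3 strictly upper triangular matrix N satisfies N^3 = 0,
    # illustrating nilpotency of index at most n.
    import numpy as np

    N = np.array([[0.0, 2.0, 5.0],
                  [0.0, 0.0, 3.0],
                  [0.0, 0.0, 0.0]])
    print(np.linalg.matrix_power(N, 3))   # the 3x3 zero matrix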

Atomic triangular matrix

Main article: Frobenius matrix

An atomic (lower or upper) triangular matrix is a special form of unitriangular matrix, where all of the off-diagonal elements are zero, except for the entries in a single column. Such a matrix is also called a Frobenius matrix, a Gauss matrix, or a Gauss transformation matrix.
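
As a sketch (assuming NumPy; the matrices A and G below are hypothetical illustrative values), an atomic lower triangular matrix can be built to zero out the entries of one column below the pivot, which is how Gauss transformations arise in Gaussian elimination:

    # G is the identity with multipliers in a single column (an atomic lower
    # triangular matrix), chosen so that G @ A has zeros below the first pivot.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [6.0, 4.0]])
    G = np.eye(2)
    G[1, 0] = -A[1, 0] / A[0, 0]   # multiplier -3 below the diagonal
    print(G @ A)                   # [[2. 1.]
                                   #  [0. 1.]]  first column eliminated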

Block triangular matrix

Main article: Block matrix

A block triangular matrix is a block matrix (partitioned matrix) that is a triangular matrix.

Upper block triangular


A matrix $A$ is upper block triangular if

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1k} \\ 0 & A_{22} & \cdots & A_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{kk} \end{bmatrix},$$

where $A_{ij} \in \mathbb{F}^{n_i \times n_j}$ for all $i, j = 1, \ldots, k$.[2]

Lower block triangular


A matrix $A$ is lower block triangular if

$$A = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ A_{21} & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kk} \end{bmatrix},$$

where $A_{ij} \in \mathbb{F}^{n_i \times n_j}$ for all $i, j = 1, \ldots, k$.[2]

Triangularisability


A matrix that is similar to a triangular matrix is referred to as triangularizable. Abstractly, this is equivalent to stabilizing a flag: upper triangular matrices are precisely those that preserve the standard flag, which is given by the standard ordered basis $(e_1, \ldots, e_n)$ and the resulting flag $0 < \langle e_1 \rangle < \langle e_1, e_2 \rangle < \cdots < \langle e_1, \ldots, e_n \rangle = K^n$. All flags are conjugate (as the general linear group acts transitively on bases), so any matrix that stabilises a flag is similar to one that stabilizes the standard flag.

Any complex square matrix is triangularizable.[1] In fact, a matrix A over a field containing all of the eigenvalues of A (for example, any matrix over an algebraically closed field) is similar to a triangular matrix. This can be proven by using induction on the fact that A has an eigenvector, by taking the quotient space by the eigenvector and inducting to show that A stabilizes a flag, and is thus triangularizable with respect to a basis for that flag.

A more precise statement is given by the Jordan normal form theorem, which states that in this situation, A is similar to an upper triangular matrix of a very particular form. The simpler triangularization result is often sufficient, however, and in any case it is used in proving the Jordan normal form theorem.[1][3]

In the case of complex matrices, it is possible to say more about triangularization, namely, that any square matrix A has a Schur decomposition. This means that A is unitarily equivalent (i.e. similar, using a unitary matrix as change of basis) to an upper triangular matrix; this follows by taking a Hermitian basis for the flag.
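
A sketch of this decomposition using SciPy (assuming scipy.linalg.schur is available; complex output is requested so that the triangular factor is obtained even for a real matrix with complex eigenvalues):

    # Schur decomposition A = Z T Z* with Z unitary and T upper triangular.
    import numpy as np
    from scipy.linalg import schur

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    T, Z = schur(A, output='complex')
    print(np.allclose(A, Z @ T @ Z.conj().T))   # True
    print(np.allclose(T, np.triu(T)))           # True: T is upper triangular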

Simultaneous triangularisability

See also: Simultaneously diagonalizable

A set of matrices $A_1, \ldots, A_k$ are said to be simultaneously triangularisable if there is a basis under which they are all upper triangular; equivalently, if they are upper triangularizable by a single similarity matrix P. Such a set of matrices is more easily understood by considering the algebra of matrices it generates, namely all polynomials in the $A_i$, denoted $K[A_1, \ldots, A_k]$. Simultaneous triangularizability means that this algebra is conjugate into the Lie subalgebra of upper triangular matrices, and is equivalent to this algebra being a Lie subalgebra of a Borel subalgebra.

The basic result is that (over an algebraically closed field), the commuting matrices $A, B$ or more generally $A_1, \ldots, A_k$ are simultaneously triangularizable. This can be proven by first showing that commuting matrices have a common eigenvector, and then inducting on dimension as before. This was proven by Frobenius, starting in 1878 for a commuting pair, as discussed at commuting matrices. As for a single matrix, over the complex numbers these can be triangularized by unitary matrices.

The fact that commuting matrices have a common eigenvector can be interpreted as a result of Hilbert's Nullstellensatz: commuting matrices form a commutative algebra $K[A_1, \ldots, A_k]$ over $K[x_1, \ldots, x_k]$ which can be interpreted as a variety in k-dimensional affine space, and the existence of a (common) eigenvalue (and hence a common eigenvector) corresponds to this variety having a point (being non-empty), which is the content of the (weak) Nullstellensatz.[citation needed] In algebraic terms, these operators correspond to an algebra representation of the polynomial algebra in k variables.

This is generalized by Lie's theorem, which shows that any representation of a solvable Lie algebra is simultaneously upper triangularizable, the case of commuting matrices being the abelian Lie algebra case, abelian being a fortiori solvable.

More generally and precisely, a set of matrices $A_1, \ldots, A_k$ is simultaneously triangularisable if and only if the matrix $p(A_1, \ldots, A_k)[A_i, A_j]$ is nilpotent for all polynomials p in k non-commuting variables, where $[A_i, A_j]$ is the commutator; for commuting $A_i$ the commutator vanishes so this holds. This was proven by Drazin, Dungey, and Gruenberg in 1951;[4] a brief proof is given by Prasolov in 1994.[5] One direction is clear: if the matrices are simultaneously triangularisable, then $[A_i, A_j]$ is strictly upper triangularizable (hence nilpotent), which is preserved by multiplication by any $A_k$ or combination thereof – it will still have 0s on the diagonal in the triangularizing basis.

Algebras of triangular matrices

Figure: Binary lower unitriangular Toeplitz matrices, multiplied using F2 operations. They form the Cayley table of Z4 and correspond to powers of the 4-bit Gray code permutation.

Upper triangularity is preserved by many operations:

  • The sum of two upper triangular matrices is upper triangular.
  • The product of two upper triangular matrices is upper triangular.
  • The inverse of an upper triangular matrix, if it exists, is upper triangular.
  • The product of an upper triangular matrix and a scalar is upper triangular.

Together these facts mean that the upper triangular matrices form a subalgebra of the associative algebra of square matrices for a given size. Additionally, this also shows that the upper triangular matrices can be viewed as a Lie subalgebra of the Lie algebra of square matrices of a fixed size, where the Lie bracket [a, b] is given by the commutator ab − ba. The Lie algebra of all upper triangular matrices is a solvable Lie algebra. It is often referred to as a Borel subalgebra of the Lie algebra of all square matrices.
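
These closure properties are easy to check numerically; a sketch assuming NumPy:

    # Sums, products, scalar multiples, and inverses of upper triangular
    # matrices are again upper triangular.
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.triu(rng.random((4, 4))) + np.eye(4)   # invertible: nonzero diagonal
    B = np.triu(rng.random((4, 4)))

    for M in (A + B, A @ B, 2.5 * A, np.linalg.inv(A)):
        print(np.allclose(M, np.triu(M)))         # True for each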

All these results hold if upper triangular is replaced by lower triangular throughout; in particular the lower triangular matrices also form a Lie algebra. However, operations mixing upper and lower triangular matrices do not in general produce triangular matrices. For instance, the sum of an upper and a lower triangular matrix can be any matrix; the product of a lower triangular with an upper triangular matrix is not necessarily triangular either.

The set of unitriangular matrices forms a Lie group.

The set of strictly upper (or lower) triangular matrices forms a nilpotent Lie algebra, denoted $\mathfrak{n}$. This algebra is the derived Lie algebra of $\mathfrak{b}$, the Lie algebra of all upper triangular matrices; in symbols, $\mathfrak{n} = [\mathfrak{b}, \mathfrak{b}]$. In addition, $\mathfrak{n}$ is the Lie algebra of the Lie group of unitriangular matrices.

In fact, by Engel's theorem, any finite-dimensional nilpotent Lie algebra is conjugate to a subalgebra of the strictly upper triangular matrices, that is to say, a finite-dimensional nilpotent Lie algebra is simultaneously strictly upper triangularizable.

Algebras of upper triangular matrices have a natural generalization in functional analysis which yields nest algebras on Hilbert spaces.

See also: Affine group

Borel subgroups and Borel subalgebras

Main articles: Borel subgroup and Borel subalgebra

The set of invertible triangular matrices of a given kind (lower or upper) forms a group, indeed a Lie group, which is a subgroup of the general linear group of all invertible matrices. A triangular matrix is invertible precisely when its diagonal entries are invertible (non-zero).

Over the real numbers, this group is disconnected, having $2^n$ components accordingly as each diagonal entry is positive or negative. The identity component consists of the invertible triangular matrices with positive entries on the diagonal, and the group of all invertible triangular matrices is a semidirect product of this group and the group of diagonal matrices with $\pm 1$ on the diagonal, corresponding to the components.

The Lie algebra of the Lie group of invertible upper triangular matrices is the set of all upper triangular matrices, not necessarily invertible, and is a solvable Lie algebra. These are, respectively, the standard Borel subgroup B of the Lie group GL_n and the standard Borel subalgebra $\mathfrak{b}$ of the Lie algebra gl_n.

The upper triangular matrices are precisely those that stabilize the standard flag. The invertible ones among them form a subgroup of the general linear group, whose conjugate subgroups are those defined as the stabilizer of some (other) complete flag. These subgroups are Borel subgroups. The group of invertible lower triangular matrices is such a subgroup, since it is the stabilizer of the standard flag associated to the standard basis in reverse order.

The stabilizer of a partial flag obtained by forgetting some parts of the standard flag can be described as a set of block upper triangular matrices (but its elements are not all triangular matrices). The conjugates of such a group are the subgroups defined as the stabilizer of some partial flag. These subgroups are called parabolic subgroups.

Examples


The group of 2×2 upper unitriangular matrices is isomorphic to the additive group of the field of scalars; in the case of complex numbers it corresponds to a group formed of parabolic Möbius transformations; the 3×3 upper unitriangular matrices form the Heisenberg group.
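
For the 2×2 case, a short sketch (assuming NumPy) shows that multiplying upper unitriangular matrices adds their off-diagonal parameters, matching the additive group of scalars:

    # u(a) u(b) = u(a + b) for 2x2 upper unitriangular matrices.
    import numpy as np

    def u(t):
        return np.array([[1.0, t],
                         [0.0, 1.0]])

    print(np.allclose(u(2.0) @ u(3.0), u(5.0)))   # True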

See also


References

  1. Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). New York: Springer. pp. 86–87, 169. ISBN 0-387-22595-1. OCLC 54850562.
  2. Bernstein, Dennis S. (2009). Matrix Mathematics: Theory, Facts, and Formulas (2nd ed.). Princeton, NJ: Princeton University Press. p. 168. ISBN 978-0-691-14039-1.
  3. Herstein, I. N. (1975). Topics in Algebra (2nd ed.). New York: Wiley. pp. 285–290. ISBN 0-471-01090-1. OCLC 3307396.
  4. Drazin, M. P.; Dungey, J. W.; Gruenberg, K. W. (1951). "Some Theorems on Commutative Matrices". Journal of the London Mathematical Society. 26 (3): 221–228. doi:10.1112/jlms/s1-26.3.221.
  5. Prasolov, V. V. (1994). Problems and Theorems in Linear Algebra. Simeon Ivanov. Providence, R.I.: American Mathematical Society. pp. 178–179. ISBN 9780821802366. OCLC 30076024.