Symmetric matrix

From Wikipedia, the free encyclopedia
Matrix equal to its transpose
This article is about a matrix symmetric about its diagonal. For a matrix symmetric about its center, see Centrosymmetric matrix.
For matrices with symmetry over the complex number field, see Hermitian matrix.

Symmetry of a 5×5 matrix

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally,

$A \text{ is symmetric} \iff A = A^{\mathsf{T}}.$

Because equal matrices have equal dimensions, only square matrices can be symmetric.

The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if $a_{ij}$ denotes the entry in the $i$th row and $j$th column, then

$A \text{ is symmetric} \iff a_{ji} = a_{ij}$

for all indices $i$ and $j$.

Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator[1] expressed in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one with real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.

Example


The following $3\times 3$ matrix is symmetric:

$A = \begin{bmatrix} 1 & 7 & 3 \\ 7 & 4 & 5 \\ 3 & 5 & 2 \end{bmatrix},$

since $A = A^{\mathsf{T}}$.
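The defining condition $A = A^{\mathsf{T}}$ translates directly into code. A minimal sketch (NumPy assumed; not part of the original article) checking the example matrix:

```python
import numpy as np

# The example matrix from the text.
A = np.array([[1, 7, 3],
              [7, 4, 5],
              [3, 5, 2]])

# A matrix is symmetric exactly when it equals its transpose.
is_symmetric = np.array_equal(A, A.T)
print(is_symmetric)  # True
```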

Properties


Basic properties

The sum and difference of two symmetric matrices are again symmetric, as is any scalar multiple of a symmetric matrix. The product $AB$ of two symmetric matrices $A$ and $B$ is symmetric if and only if $A$ and $B$ commute, since $(AB)^{\mathsf{T}} = B^{\mathsf{T}}A^{\mathsf{T}} = BA$.

Decomposition into symmetric and skew-symmetric


Any square matrix can be written uniquely as the sum of a symmetric and a skew-symmetric matrix; this decomposition is known as the Toeplitz decomposition. Let $\mathrm{Mat}_n$ denote the space of $n\times n$ matrices. If $\mathrm{Sym}_n$ denotes the space of $n\times n$ symmetric matrices and $\mathrm{Skew}_n$ the space of $n\times n$ skew-symmetric matrices, then $\mathrm{Mat}_n = \mathrm{Sym}_n + \mathrm{Skew}_n$ and $\mathrm{Sym}_n \cap \mathrm{Skew}_n = \{0\}$, i.e.

$\mathrm{Mat}_n = \mathrm{Sym}_n \oplus \mathrm{Skew}_n,$

where $\oplus$ denotes the direct sum. Let $X \in \mathrm{Mat}_n$; then

$X = \tfrac{1}{2}\left(X + X^{\mathsf{T}}\right) + \tfrac{1}{2}\left(X - X^{\mathsf{T}}\right).$

Notice that $\tfrac{1}{2}\left(X + X^{\mathsf{T}}\right) \in \mathrm{Sym}_n$ and $\tfrac{1}{2}\left(X - X^{\mathsf{T}}\right) \in \mathrm{Skew}_n$. This is true for every square matrix $X$ with entries from any field whose characteristic is different from 2.
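The Toeplitz decomposition above is easy to verify numerically. A minimal NumPy sketch (the random seed and matrix size are arbitrary choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))   # an arbitrary square matrix

sym  = 0.5 * (X + X.T)            # symmetric part
skew = 0.5 * (X - X.T)            # skew-symmetric part

assert np.allclose(sym, sym.T)    # sym is symmetric
assert np.allclose(skew, -skew.T) # skew is skew-symmetric
assert np.allclose(sym + skew, X) # the two parts sum back to X
```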

A symmetric $n\times n$ matrix is determined by $\tfrac{1}{2}n(n+1)$ scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by $\tfrac{1}{2}n(n-1)$ scalars (the number of entries above the main diagonal).

Matrix congruent to a symmetric matrix


Any matrix congruent to a symmetric matrix is again symmetric: if $X$ is a symmetric matrix, then so is $A X A^{\mathsf{T}}$ for any matrix $A$.
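A quick numerical check of this congruence property (a sketch with arbitrary random matrices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
X = M + M.T                       # a symmetric matrix
A = rng.standard_normal((3, 3))   # any matrix

Y = A @ X @ A.T                   # matrix congruent to X
assert np.allclose(Y, Y.T)        # congruence preserves symmetry
```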

Symmetry implies normality


A (real-valued) symmetric matrix is necessarily a normal matrix.

Real symmetric matrices


Denote by $\langle \cdot , \cdot \rangle$ the standard inner product on $\mathbb{R}^n$. The real $n\times n$ matrix $A$ is symmetric if and only if

$\langle Ax, y \rangle = \langle x, Ay \rangle \quad \forall x, y \in \mathbb{R}^n.$

Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator $A$ and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.

The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every real symmetric matrix $A$ there exists a real orthogonal matrix $Q$ such that $D = Q^{\mathsf{T}} A Q$ is a diagonal matrix. Every real symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.

If $A$ and $B$ are $n\times n$ real symmetric matrices that commute, then they can be simultaneously diagonalized by an orthogonal matrix:[2] there exists a basis of $\mathbb{R}^n$ such that every element of the basis is an eigenvector for both $A$ and $B$.
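Simultaneous diagonalization can be observed numerically. A sketch under stated assumptions: $A$ is a random symmetric matrix (so its eigenvalues are distinct almost surely), and $B$ is taken as a polynomial in $A$ so that the two commute; the orthogonal eigenbasis of $A$ then diagonalizes $B$ as well:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M + M.T                        # real symmetric, distinct eigenvalues a.s.

# B = A^2 + 3A is symmetric and commutes with A (any polynomial in A does).
B = A @ A + 3 * A
assert np.allclose(A @ B, B @ A)

_, Q = np.linalg.eigh(A)           # orthonormal eigenvectors of A
D_B = Q.T @ B @ Q                  # the same basis diagonalizes B
assert np.allclose(D_B, np.diag(np.diag(D_B)))  # off-diagonals vanish
```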

Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix $D$ above, and therefore $D$ is uniquely determined by $A$ up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.

Complex symmetric matrices


A complex symmetric matrix can be 'diagonalized' using a unitary matrix: if $A$ is a complex symmetric matrix, there is a unitary matrix $U$ such that $U A U^{\mathsf{T}}$ is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians.[3][4]

In fact, the matrix $B = A^{\dagger} A$ is Hermitian and positive semi-definite, so there is a unitary matrix $V$ such that $V^{\dagger} B V$ is diagonal with non-negative real entries. Thus $C = V^{\mathsf{T}} A V$ is complex symmetric with $C^{\dagger} C$ real. Writing $C = X + iY$ with $X$ and $Y$ real symmetric matrices, $C^{\dagger} C = X^2 + Y^2 + i(XY - YX)$, so $XY = YX$. Since $X$ and $Y$ commute, there is a real orthogonal matrix $W$ such that both $W X W^{\mathsf{T}}$ and $W Y W^{\mathsf{T}}$ are diagonal. Setting $U = W V^{\mathsf{T}}$ (a unitary matrix), the matrix $U A U^{\mathsf{T}}$ is complex diagonal. Pre-multiplying $U$ by a suitable diagonal unitary matrix (which preserves the unitarity of $U$), the diagonal entries of $U A U^{\mathsf{T}}$ can be made real and non-negative as desired. To construct this diagonal matrix, write $U A U^{\mathsf{T}} = \operatorname{diag}(r_1 e^{i\theta_1}, r_2 e^{i\theta_2}, \dots, r_n e^{i\theta_n})$ and take $D = \operatorname{diag}(e^{-i\theta_1/2}, e^{-i\theta_2/2}, \dots, e^{-i\theta_n/2})$. Then $D\,U A U^{\mathsf{T}} D = \operatorname{diag}(r_1, r_2, \dots, r_n)$ as desired, so we make the modification $U' = DU$. Since the squares of the $r_i$ are the eigenvalues of $A^{\dagger} A$, the $r_i$ coincide with the singular values of $A$. (Note that, regarding the eigendecomposition of a complex symmetric matrix $A$, the Jordan normal form of $A$ may not be diagonal, so $A$ may not be diagonalizable by any similarity transformation.)
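The closing caveat can be made concrete. The following sketch (our own example, not from the article) exhibits a complex symmetric matrix that is nilpotent, hence not diagonalizable by any similarity transformation, while its singular values (the diagonal of the Autonne–Takagi factorization) remain well defined:

```python
import numpy as np

# A complex symmetric matrix (A == A^T, but NOT Hermitian) that is defective:
A = np.array([[1,  1j],
              [1j, -1]])

assert np.array_equal(A, A.T)   # complex symmetric
assert np.allclose(A @ A, 0)    # nilpotent: A^2 = 0

# Its only eigenvalue is 0, yet A != 0, so no similarity transformation can
# diagonalize it (its Jordan form is a single 2x2 Jordan block).
assert np.allclose(np.linalg.eigvals(A), 0, atol=1e-6)

# The singular values are the square roots of the eigenvalues of A†A.
s = np.linalg.svd(A, compute_uv=False)
assert np.allclose(s, [2, 0])
```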

Decomposition


Using the Jordan normal form, one can prove that every square real matrix can be written as a product of two real symmetric matrices, and every square complex matrix can be written as a product of two complex symmetric matrices.[5]

Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix; this is called a polar decomposition. Singular matrices can also be factored, but not uniquely.
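The polar factors can be computed from the SVD, since $A = W \Sigma V^{\mathsf{T}} = (W V^{\mathsf{T}})(V \Sigma V^{\mathsf{T}})$. A minimal NumPy sketch (the variable names are ours; a random matrix is non-singular almost surely):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))            # non-singular almost surely

# Polar decomposition A = U P via the SVD A = W S V^T:
W, s, Vt = np.linalg.svd(A)
U = W @ Vt                                 # orthogonal factor
P = Vt.T @ np.diag(s) @ Vt                 # symmetric positive definite factor

assert np.allclose(U @ U.T, np.eye(3))     # U is orthogonal
assert np.allclose(P, P.T)                 # P is symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)   # P is positive definite
assert np.allclose(U @ P, A)               # A = U P
```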

The Cholesky decomposition states that every real positive-definite symmetric matrix $A$ is the product of a lower-triangular matrix $L$ and its transpose, $A = L L^{\mathsf{T}}.$
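A minimal sketch of the Cholesky factorization with NumPy (the matrix is constructed to be positive definite; the shift `4 * I` is an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)        # symmetric positive definite by construction

L = np.linalg.cholesky(A)          # lower-triangular Cholesky factor
assert np.allclose(L, np.tril(L))  # L is lower triangular
assert np.allclose(L @ L.T, A)     # A = L L^T
```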

If the matrix is symmetric indefinite, it may still be decomposed as $P A P^{\mathsf{T}} = L D L^{\mathsf{T}}$, where $P$ is a permutation matrix (arising from the need to pivot), $L$ is a lower unit triangular matrix, and $D$ is a direct sum of symmetric $1\times 1$ and $2\times 2$ blocks; this is called the Bunch–Kaufman decomposition.[6]

A general (complex) symmetric matrix may be defective and thus not diagonalizable. If $A$ is diagonalizable it may be decomposed as

$A = Q \Lambda Q^{\mathsf{T}},$

where $Q$ is an orthogonal matrix ($Q Q^{\mathsf{T}} = I$) and $\Lambda$ is a diagonal matrix of the eigenvalues of $A$. In the special case that $A$ is real symmetric, $Q$ and $\Lambda$ are also real. To see the orthogonality, suppose $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors corresponding to distinct eigenvalues $\lambda_1$, $\lambda_2$. Then

$\lambda_1 \langle \mathbf{x}, \mathbf{y} \rangle = \langle A\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A\mathbf{y} \rangle = \lambda_2 \langle \mathbf{x}, \mathbf{y} \rangle.$

Since $\lambda_1$ and $\lambda_2$ are distinct, we have $\langle \mathbf{x}, \mathbf{y} \rangle = 0$.
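For a real symmetric matrix this decomposition is exactly what `numpy.linalg.eigh` computes. A minimal sketch verifying $A = Q \Lambda Q^{\mathsf{T}}$ and the orthogonality of $Q$:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
A = M + M.T                                    # real symmetric

lam, Q = np.linalg.eigh(A)                     # eigenvalues, orthonormal eigenvectors
assert np.allclose(Q @ Q.T, np.eye(4))         # Q is orthogonal
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)  # A = Q Λ Q^T
assert np.all(np.isreal(lam))                  # eigenvalues are real
```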

Hessian


Symmetric $n\times n$ matrices of real functions appear as the Hessians of twice differentiable functions of $n$ real variables (continuity of the second derivatives is not needed, despite common belief to the contrary[7]).

Every quadratic form $q$ on $\mathbb{R}^n$ can be written uniquely in the form $q(\mathbf{x}) = \mathbf{x}^{\mathsf{T}} A \mathbf{x}$ with a symmetric $n\times n$ matrix $A$. Because of the spectral theorem above, one can then say that every quadratic form, up to the choice of an orthonormal basis of $\mathbb{R}^n$, "looks like"

$q\left(x_1, \ldots, x_n\right) = \sum_{i=1}^{n} \lambda_i x_i^2$

with real numbers $\lambda_i$. This considerably simplifies the study of quadratic forms, as well as the study of the level sets $\left\{\mathbf{x} : q(\mathbf{x}) = 1\right\}$, which are generalizations of conic sections.
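The diagonalized form of a quadratic form can be checked directly: rotating into the orthonormal eigenbasis turns $\mathbf{x}^{\mathsf{T}} A \mathbf{x}$ into a weighted sum of squares. A sketch with an arbitrary $2\times 2$ example of ours:

```python
import numpy as np

# Quadratic form q(x) = x^T A x for a symmetric A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)      # A = Q diag(lam) Q^T

x = np.array([1.0, -3.0])
y = Q.T @ x                     # coordinates in the orthonormal eigenbasis

q_direct = x @ A @ x
q_diag = np.sum(lam * y**2)     # q "looks like" sum_i lam_i y_i^2
assert np.allclose(q_direct, q_diag)
```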

This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem.

Symmetrizable matrix


An $n\times n$ matrix $A$ is said to be symmetrizable if there exists an invertible diagonal matrix $D$ and a symmetric matrix $S$ such that $A = DS.$

The transpose of a symmetrizable matrix is symmetrizable, since $A^{\mathsf{T}} = (DS)^{\mathsf{T}} = SD = D^{-1}(DSD)$ and $DSD$ is symmetric. A matrix $A = (a_{ij})$ is symmetrizable if and only if the following conditions are met:

  1. $a_{ij} = 0$ implies $a_{ji} = 0$ for all $1 \leq i \leq j \leq n.$
  2. $a_{i_1 i_2} a_{i_2 i_3} \cdots a_{i_k i_1} = a_{i_2 i_1} a_{i_3 i_2} \cdots a_{i_1 i_k}$ for any finite sequence $\left(i_1, i_2, \dots, i_k\right).$
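The two conditions can be illustrated numerically. A sketch with a symmetrizable matrix of our own construction ($D$ and $S$ chosen arbitrarily): the zero pattern of $A = DS$ is symmetric, and the products around a cycle of indices agree in both directions.

```python
import numpy as np

# A symmetrizable (but not symmetric) matrix built as A = D S,
# with D invertible diagonal and S symmetric.
D = np.diag([1.0, 2.0, 3.0])
S = np.array([[0.0, 1.0, 2.0],
              [1.0, 5.0, 3.0],
              [2.0, 3.0, 4.0]])
A = D @ S
assert not np.allclose(A, A.T)              # A itself is not symmetric

# Condition 1: a_ij = 0 exactly where a_ji = 0.
assert np.array_equal(A == 0, A.T == 0)

# Condition 2 for the cycle (i1, i2, i3) = (0, 1, 2):
assert np.isclose(A[0, 1] * A[1, 2] * A[2, 0],
                  A[1, 0] * A[2, 1] * A[0, 2])
```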

See also


Other types of symmetry or pattern in square matrices have special names; see, for example, Centrosymmetric matrix and Hermitian matrix.

See also symmetry in mathematics.

Notes

  1. ^ Jesús Rojo García (1986). Álgebra lineal (in Spanish) (2nd ed.). Editorial AC. ISBN 84-7288-120-2.
  2. ^ Bellman, Richard (1997). Introduction to Matrix Analysis (2nd ed.). SIAM. ISBN 08-9871-399-4.
  3. ^ Horn & Johnson 2013, pp. 263, 278.
  4. ^ See:
  5. ^ Bosch, A. J. (1986). "The factorization of a square matrix into two symmetric matrices". American Mathematical Monthly. 93 (6): 462–464. doi:10.2307/2323471. JSTOR 2323471.
  6. ^ Golub, G. H.; van Loan, C. F. (1996). Matrix Computations. Johns Hopkins University Press. ISBN 0-8018-5413-X. OCLC 34515797.
  7. ^ Dieudonné, Jean A. (1969). "Theorem (8.12.2)". Foundations of Modern Analysis. Academic Press. p. 180. ISBN 0-12-215550-5. OCLC 576465.

References

  • Horn, Roger A.; Johnson, Charles R. (2013), Matrix Analysis (2nd ed.), Cambridge University Press, ISBN 978-0-521-54823-6

Retrieved from "https://en.wikipedia.org/w/index.php?title=Symmetric_matrix&oldid=1304144310"