Minor (linear algebra)

From Wikipedia, the free encyclopedia
Determinant of a square submatrix of a matrix
This article is about a concept in linear algebra. For the concept of "minor" in graph theory, see Graph minor.

In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix generated from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.

Definition and illustration


First minors


If A is a square matrix, then the minor of the entry in the i-th row and j-th column (also called the (i, j) minor, or a first minor[1]) is the determinant of the submatrix formed by deleting the i-th row and j-th column. This number is often denoted M_{i,j}. The (i, j) cofactor is obtained by multiplying the minor by (−1)^{i+j}.

To illustrate these definitions, consider the following 3 × 3 matrix:

{\displaystyle {\begin{bmatrix}1&4&7\\3&0&5\\-1&9&11\end{bmatrix}}}

To compute the minor M_{2,3} and the cofactor C_{2,3}, we find the determinant of the above matrix with row 2 and column 3 removed.

{\displaystyle M_{2,3}=\det {\begin{bmatrix}1&4&\Box \\\Box &\Box &\Box \\-1&9&\Box \end{bmatrix}}=\det {\begin{bmatrix}1&4\\-1&9\end{bmatrix}}=9-(-4)=13}

So the cofactor of the (2, 3) entry is

{\displaystyle C_{2,3}=(-1)^{2+3}(M_{2,3})=-13.}
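As a quick check of the arithmetic above, the first minor and cofactor can be computed directly. This is a minimal sketch using plain nested lists; the helper names `minor_ij` and `cofactor_ij` are illustrative, not standard:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor_ij(a, i, j):
    """First minor M_{i,j}: determinant after deleting row i and column j (1-based)."""
    sub = [[a[r][c] for c in range(len(a[0])) if c != j - 1]
           for r in range(len(a)) if r != i - 1]
    return det2(sub)

def cofactor_ij(a, i, j):
    """Cofactor C_{i,j} = (-1)^(i+j) * M_{i,j}."""
    return (-1) ** (i + j) * minor_ij(a, i, j)

A = [[1, 4, 7], [3, 0, 5], [-1, 9, 11]]
print(minor_ij(A, 2, 3))     # 13
print(cofactor_ij(A, 2, 3))  # -13
```

The values match the worked example: M_{2,3} = 13 and C_{2,3} = −13.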

General definition


Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n − k)-th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m − k rows and n − k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of {\textstyle {m \choose k}\cdot {n \choose k}} minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix.[2][3]
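The count {\textstyle {m \choose k}\cdot {n \choose k}} can be verified by brute-force enumeration. The sketch below lists every k × k minor of a small matrix by choosing row and column index sets; the helper names `det` and `all_minors` are our own:

```python
from itertools import combinations
from math import comb

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def all_minors(a, k):
    """All k x k minors of a, keyed by the chosen row and column index sets."""
    m, n = len(a), len(a[0])
    return {(I, J): det([[a[i][j] for j in J] for i in I])
            for I in combinations(range(m), k)
            for J in combinations(range(n), k)}

A = [[1, 4, 7], [3, 0, 5]]   # a 2 x 3 matrix
minors = all_minors(A, 2)
print(len(minors), comb(2, 2) * comb(3, 2))  # 3 3
```

For this 2 × 3 matrix there are exactly C(2,2)·C(3,2) = 3 minors of order 2, as the formula predicts.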

Let

{\displaystyle {\begin{aligned}I&=1\leq i_{1}<i_{2}<\cdots <i_{k}\leq m,\\J&=1\leq j_{1}<j_{2}<\cdots <j_{k}\leq n,\end{aligned}}}

be ordered sequences (in natural order, as is always assumed when talking about minors unless otherwise stated) of indexes. The minor {\textstyle \det {\bigl (}(\mathbf {A} _{i_{p},j_{q}})_{p,q=1,\ldots ,k}{\bigr )}} corresponding to these choices of indexes is denoted {\displaystyle \det _{I,J}A}, {\displaystyle \det \mathbf {A} _{I,J}}, {\displaystyle [\mathbf {A} ]_{I,J}}, {\displaystyle M_{I,J}}, {\displaystyle M_{i_{1},i_{2},\ldots ,i_{k},j_{1},j_{2},\ldots ,j_{k}}}, or {\displaystyle M_{(i),(j)}} (where (i) denotes the sequence of indexes I, etc.), depending on the source. Also, two conventions are in use in the literature: by the minor associated to ordered sequences of indexes I and J, some authors[4] mean the determinant of the matrix that is formed as above, by taking the elements of the original matrix from the rows whose indexes are in I and the columns whose indexes are in J, whereas other authors mean the determinant of the matrix formed from the original matrix by deleting the rows in I and the columns in J;[2] which convention is used should always be checked. In this article, we use the inclusive definition of choosing the elements from the rows of I and the columns of J. The only exception is the first minor, or (i, j)-minor, described above; in that case, the exclusive meaning {\textstyle M_{i,j}=\det {\bigl (}\left(\mathbf {A} _{p,q}\right)_{p\neq i,q\neq j}{\bigr )}} is standard everywhere in the literature and is used in this article as well.

Complement


The complement B_{ijk...,pqr...} of a minor M_{ijk...,pqr...} of a square matrix A is formed by the determinant of the matrix A from which all the rows (ijk...) and columns (pqr...) associated with M_{ijk...,pqr...} have been removed. The complement of the first minor of an element a_{ij} is merely that element.[5]

Applications of minors and cofactors


Cofactor expansion of the determinant

Main article: Laplace expansion

The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix A = (a_{ij}), the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them. In other words, defining {\displaystyle C_{ij}=(-1)^{i+j}M_{ij}}, the cofactor expansion along the j-th column gives:

{\displaystyle {\begin{aligned}\det(\mathbf {A} )&=a_{1j}C_{1j}+a_{2j}C_{2j}+a_{3j}C_{3j}+\cdots +a_{nj}C_{nj}\\&=\sum _{i=1}^{n}a_{ij}C_{ij}\\&=\sum _{i=1}^{n}a_{ij}(-1)^{i+j}M_{ij}\end{aligned}}}

The cofactor expansion along the i-th row gives:

{\displaystyle {\begin{aligned}\det(\mathbf {A} )&=a_{i1}C_{i1}+a_{i2}C_{i2}+a_{i3}C_{i3}+\cdots +a_{in}C_{in}\\&=\sum _{j=1}^{n}a_{ij}C_{ij}\\&=\sum _{j=1}^{n}a_{ij}(-1)^{i+j}M_{ij}\end{aligned}}}
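The row expansion above translates directly into a recursive determinant routine. This is a sketch for small matrices only (the recursion costs O(n!) operations); the function name `det` is our own:

```python
def det(a):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]  # delete row 1, column j+1
        total += (-1) ** j * a[0][j] * det(minor)         # sign (-1)^(1+(j+1)) = (-1)^j
    return total

A = [[1, 4, 7], [3, 0, 5], [-1, 9, 11]]
print(det(A))  # -8
```

Expanding the example matrix along its first row gives 1·(−45) − 4·38 + 7·27 = −8.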

Inverse of a matrix

Main article: Invertible matrix

One can write down the inverse of an invertible matrix by computing its cofactors by using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, comatrix):

{\displaystyle \mathbf {C} ={\begin{bmatrix}C_{11}&C_{12}&\cdots &C_{1n}\\C_{21}&C_{22}&\cdots &C_{2n}\\\vdots &\vdots &\ddots &\vdots \\C_{n1}&C_{n2}&\cdots &C_{nn}\end{bmatrix}}}

Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:

{\displaystyle \mathbf {A} ^{-1}={\frac {1}{\det(\mathbf {A} )}}\mathbf {C} ^{\mathsf {T}}.}

The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.
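The adjugate formula can be sketched in a few lines. Exact rational arithmetic (`fractions.Fraction`) avoids floating-point noise; the function name `inverse_via_adjugate` is illustrative:

```python
from fractions import Fraction

def det(a):
    """Determinant by Laplace expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(n))

def inverse_via_adjugate(a):
    """A^{-1} = adj(A) / det(A), where adj(A) is the transposed cofactor matrix."""
    n = len(a)
    d = Fraction(det(a))
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
                                   for k, r in enumerate(a) if k != i])
            for j in range(n)] for i in range(n)]
    # Transpose the cofactor matrix (giving the adjugate), then divide by det(A).
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

A = [[2, 0], [1, 3]]
print(inverse_via_adjugate(A))  # A^{-1} = [[1/2, 0], [-1/6, 1/3]]
```

Here det(A) = 6 and the cofactor matrix is [[3, −1], [0, 2]], so transposing and dividing by 6 recovers the inverse exactly.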

The above formula can be generalized as follows: Let

{\displaystyle {\begin{aligned}I&=1\leq i_{1}<i_{2}<\ldots <i_{k}\leq n,\\J&=1\leq j_{1}<j_{2}<\ldots <j_{k}\leq n,\end{aligned}}}

be ordered sequences (in natural order) of indexes (here A is an n × n matrix). Then[6]

{\displaystyle [\mathbf {A} ^{-1}]_{I,J}=\pm {\frac {[\mathbf {A} ]_{J',I'}}{\det \mathbf {A} }},}

where I′, J′ denote the ordered sequences of indexes (the indexes are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I′, but not in both (similarly for J and J′), and [A]_{I,J} denotes the determinant of the submatrix of A formed by choosing the rows with index in I and the columns with index in J. Also, {\displaystyle [\mathbf {A} ]_{I,J}=\det {\bigl (}(A_{i_{p},j_{q}})_{p,q=1,\ldots ,k}{\bigr )}.} A simple proof can be given using the wedge product. Indeed,

{\displaystyle {\bigl [}\mathbf {A} ^{-1}{\bigr ]}_{I,J}(e_{1}\wedge \ldots \wedge e_{n})=\pm (\mathbf {A} ^{-1}e_{j_{1}})\wedge \ldots \wedge (\mathbf {A} ^{-1}e_{j_{k}})\wedge e_{i'_{1}}\wedge \ldots \wedge e_{i'_{n-k}},}

where {\displaystyle e_{1},\ldots ,e_{n}} are the basis vectors. Acting by A on both sides, one gets

{\displaystyle {\begin{aligned}{\bigl [}\mathbf {A} ^{-1}{\bigr ]}_{I,J}\det \mathbf {A} \,(e_{1}\wedge \ldots \wedge e_{n})&=\pm (e_{j_{1}})\wedge \ldots \wedge (e_{j_{k}})\wedge (\mathbf {A} e_{i'_{1}})\wedge \ldots \wedge (\mathbf {A} e_{i'_{n-k}})\\&=\pm [\mathbf {A} ]_{J',I'}(e_{1}\wedge \ldots \wedge e_{n}).\end{aligned}}}

The sign can be worked out to be {\displaystyle (-1)^{\sum _{s=1}^{k}i_{s}-\sum _{s=1}^{k}j_{s}},} so the sign is determined by the sums of the elements of I and J.

Other applications


Given an m × n matrix with real entries (or entries from any other field) and rank r, there exists at least one non-zero r × r minor, while all larger minors are zero.
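This characterization of rank can be tested by brute force on a small example. The helper name `rank_by_minors` is our own, and the search is exponential in the matrix size, so this is for illustration only:

```python
from itertools import combinations

def det(a):
    """Determinant by Laplace expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(n))

def rank_by_minors(a):
    """Largest k for which some k x k minor is nonzero."""
    m, n = len(a), len(a[0])
    for k in range(min(m, n), 0, -1):
        for I in combinations(range(m), k):
            for J in combinations(range(n), k):
                if det([[a[i][j] for j in J] for i in I]) != 0:
                    return k
    return 0

A = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]  # row 2 = 2 * row 1, so the rank is 2
print(rank_by_minors(A))  # 2
```

The only 3 × 3 minor (the full determinant) vanishes because two rows are proportional, while e.g. the minor from rows {1, 3} and columns {1, 2} equals 1, so the rank is 2.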

We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., n} with k elements, then we write [A]_{I,J} for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.

  • If I = J, then [A]_{I,J} is called a principal minor.
  • If the matrix that corresponds to a principal minor is a square upper-left submatrix of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k, also known as a leading principal submatrix), then the principal minor is called a leading principal minor (of order k) or corner (principal) minor (of order k).[3] For an n × n square matrix, there are n leading principal minors.
  • A basic minor of a matrix is the determinant of a square submatrix that is of maximal size with nonzero determinant.[3]
  • For Hermitian matrices, the leading principal minors can be used to test for positive definiteness, and the principal minors can be used to test for positive semidefiniteness. See Sylvester's criterion for more details.
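As a sketch of the last point, Sylvester's criterion for a real symmetric matrix checks exactly the n leading principal minors; the helper name `is_positive_definite` is our own:

```python
def det(a):
    """Determinant by Laplace expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(n))

def is_positive_definite(a):
    """Sylvester's criterion for a real symmetric matrix:
    positive definite iff every leading principal minor is positive."""
    n = len(a)
    return all(det([row[:k] for row in a[:k]]) > 0 for k in range(1, n + 1))

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]  # a classic positive definite matrix
print(is_positive_definite(A))  # True
```

For this matrix the leading principal minors are 2, 3, and 4, all positive, so the criterion reports positive definiteness.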

Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., p} with k elements. Then

{\displaystyle [\mathbf {AB} ]_{I,J}=\sum _{K}[\mathbf {A} ]_{I,K}[\mathbf {B} ]_{K,J}}

where the sum extends over all subsets K of {1, ..., n} with k elements. This formula is a straightforward extension of the Cauchy–Binet formula.
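The generalized Cauchy–Binet identity can be checked numerically on a small example; all helper names below (`sub_det`, `matmul`) are illustrative:

```python
from itertools import combinations

def det(a):
    """Determinant by Laplace expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(n))

def sub_det(a, I, J):
    """[A]_{I,J}: determinant of the submatrix with rows I and columns J."""
    return det([[a[i][j] for j in J] for i in I])

def matmul(a, b):
    """Ordinary matrix product of nested-list matrices."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2, 0], [3, 1, 4]]      # 2 x 3
B = [[1, 0], [2, 1], [0, 5]]    # 3 x 2
AB = matmul(A, B)
I, J = (0, 1), (0, 1)
lhs = sub_det(AB, I, J)
rhs = sum(sub_det(A, I, K) * sub_det(B, K, J)
          for K in combinations(range(3), 2))
print(lhs, rhs)  # 95 95
```

With k = 2 equal to the full size of AB, this is exactly the Cauchy–Binet formula: det(AB) = 95 on both sides.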

Multilinear algebra approach


A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the k-th exterior power map.

If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix

{\displaystyle {\begin{pmatrix}1&4\\3&\!\!-1\\2&1\end{pmatrix}}}

are −13 (from the first two rows), −7 (from the first and last rows), and 5 (from the last two rows). Now consider the wedge product

{\displaystyle (\mathbf {e} _{1}+3\mathbf {e} _{2}+2\mathbf {e} _{3})\wedge (4\mathbf {e} _{1}-\mathbf {e} _{2}+\mathbf {e} _{3})}

where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and alternating, {\displaystyle \mathbf {e} _{i}\wedge \mathbf {e} _{i}=0,} and antisymmetric, {\displaystyle \mathbf {e} _{i}\wedge \mathbf {e} _{j}=-\mathbf {e} _{j}\wedge \mathbf {e} _{i},} we can simplify this expression to

{\displaystyle -13\,\mathbf {e} _{1}\wedge \mathbf {e} _{2}-7\,\mathbf {e} _{1}\wedge \mathbf {e} _{3}+5\,\mathbf {e} _{2}\wedge \mathbf {e} _{3}}

where the coefficients agree with the minors computed earlier.
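The same simplification can be carried out mechanically: the coefficient of e_i ∧ e_j in the wedge of the two columns is exactly the 2 × 2 minor taken from rows i and j (0-based indices in this sketch; the helper name `wedge_two_columns` is our own):

```python
from itertools import combinations

def wedge_two_columns(col1, col2):
    """Coefficients of col1 ∧ col2 in the basis e_i ∧ e_j (i < j):
    each coefficient is the 2 x 2 minor taken from rows i and j."""
    n = len(col1)
    return {(i, j): col1[i] * col2[j] - col1[j] * col2[i]
            for i, j in combinations(range(n), 2)}

# Columns of the 3 x 2 example matrix above.
c1, c2 = [1, 3, 2], [4, -1, 1]
print(wedge_two_columns(c1, c2))  # {(0, 1): -13, (0, 2): -7, (1, 2): 5}
```

The three coefficients −13, −7, 5 reproduce the minors found by hand.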

A remark about different notation


In some books, the term adjunct is used instead of cofactor.[7] Moreover, it is denoted A_{ij} and defined in the same way as the cofactor:

{\displaystyle \mathbf {A} _{ij}=(-1)^{i+j}\mathbf {M} _{ij}}

Using this notation, the inverse matrix is written this way:

{\displaystyle \mathbf {M} ^{-1}={\frac {1}{\det(\mathbf {M} )}}{\begin{bmatrix}A_{11}&A_{21}&\cdots &A_{n1}\\A_{12}&A_{22}&\cdots &A_{n2}\\\vdots &\vdots &\ddots &\vdots \\A_{1n}&A_{2n}&\cdots &A_{nn}\end{bmatrix}}}

Keep in mind that adjunct is not adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.


References

  1. ^ Burnside, William Snow; Panton, Arthur William (1886). Theory of Equations: with an Introduction to the Theory of Binary Algebraic Form.
  2. ^ Hohn, Franz E. (1973). Elementary Matrix Algebra (3rd ed.). The Macmillan Company. ISBN 978-0-02-355950-1.
  3. ^ "Minor". Encyclopedia of Mathematics.
  4. ^ Shafarevich, Igor R.; Remizov, Alexey O. (2013). Linear Algebra and Geometry. Springer-Verlag Berlin Heidelberg. ISBN 978-3-642-30993-9.
  5. ^ Jeffreys, Bertha (1999). Methods of Mathematical Physics. Cambridge University Press. p. 135. ISBN 0-521-66402-0.
  6. ^ Prasolov, Viktor Vasilyevich (1994). Problems and Theorems in Linear Algebra. American Mathematical Society. p. 15. ISBN 978-0-8218-0236-6.
  7. ^ Gantmacher, Felix (1953). Theory of Matrices (1st ed., in Russian). Moscow: State Publishing House of Technical and Theoretical Literature. p. 491.
