In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix generated from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
If A is a square matrix, then the minor of the entry in the i-th row and j-th column (also called the (i, j) minor, or a first minor[1]) is the determinant of the submatrix formed by deleting the i-th row and j-th column. This number is often denoted M_{i,j}. The (i, j) cofactor is obtained by multiplying the minor by (−1)^{i+j}.
To illustrate these definitions, consider the following 3 × 3 matrix:
\[ \mathbf{A} = \begin{pmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{pmatrix}. \]
To compute the minor M_{2,3} and the cofactor C_{2,3}, we find the determinant of the above matrix with row 2 and column 3 removed:
\[ M_{2,3} = \det\begin{pmatrix} 1 & 4 \\ -1 & 9 \end{pmatrix} = 9 - (-4) = 13. \]
So the cofactor of the (2, 3) entry is
\[ C_{2,3} = (-1)^{2+3} M_{2,3} = -13. \]
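As a sanity check, the first-minor and cofactor definitions translate directly into code. A minimal pure-Python sketch, using an illustrative 3 × 3 matrix (the helper names `det`, `minor`, and `cofactor` are our own choices, not standard API):

```python
def det(a):
    """Determinant of a square 2D list, by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** c * a[0][c] * det([row[:c] + row[c + 1:] for row in a[1:]])
               for c in range(len(a)))

def minor(a, i, j):
    """M_{i,j}: determinant of `a` with row i and column j removed (1-based)."""
    sub = [[a[r][c] for c in range(len(a[0])) if c != j - 1]
           for r in range(len(a)) if r != i - 1]
    return det(sub)

def cofactor(a, i, j):
    """C_{i,j} = (-1)^(i+j) * M_{i,j}."""
    return (-1) ** (i + j) * minor(a, i, j)

A = [[1, 4, 7],
     [3, 0, 5],
     [-1, 9, 11]]
print(minor(A, 2, 3))     # M_{2,3} = 13
print(cofactor(A, 2, 3))  # C_{2,3} = -13
```

Deleting row 2 and column 3 leaves the 2 × 2 submatrix [[1, 4], [−1, 9]], whose determinant is 13, and the (2, 3) cofactor flips the sign.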
Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n − k)-th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m − k rows and n − k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of \(\binom{m}{k}\binom{n}{k}\) minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix.[2][3]
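To make the count concrete, every k × k minor of a small matrix can be enumerated with `itertools.combinations`; there are C(m, k) · C(n, k) of them. A sketch with an illustrative 3 × 2 matrix:

```python
from itertools import combinations
from math import comb

def det(a):
    """Determinant by cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** c * a[0][c] * det([row[:c] + row[c + 1:] for row in a[1:]])
               for c in range(len(a)))

def k_minors(a, k):
    """All k x k minors of a 2D list, keyed by the chosen (rows, cols) sets."""
    m, n = len(a), len(a[0])
    return {(rows, cols): det([[a[r][c] for c in cols] for r in rows])
            for rows in combinations(range(m), k)
            for cols in combinations(range(n), k)}

A = [[1, 2], [3, 4], [5, 6]]
minors = k_minors(A, 2)
print(len(minors))  # comb(3, 2) * comb(2, 2) = 3
```

Here the three 2 × 2 minors come from the three ways of choosing two of the three rows (the two columns are forced).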
Let I = (i_1, i_2, ..., i_k) and J = (j_1, j_2, ..., j_k) be ordered sequences (in natural order, as is always assumed when talking about minors unless otherwise stated) of indexes. The minor corresponding to these choices of indexes is denoted M_{I,J}, [A]_{I,J}, or M_{(i),(j)} (where (i) denotes the sequence of indexes I, etc.), depending on the source. Also, there are two conventions in use in the literature: by the minor associated to ordered sequences of indexes I and J, some authors[4] mean the determinant of the matrix that is formed as above, by taking the elements of the original matrix from the rows whose indexes are in I and columns whose indexes are in J, whereas some other authors mean by a minor associated to I and J the determinant of the matrix formed from the original matrix by deleting the rows in I and columns in J;[2] which notation is used should always be checked. In this article, we use the inclusive definition of choosing the elements from rows in I and columns in J. The exceptional case is the case of the first minor or the (i, j)-minor described above; in that case, the exclusive meaning is standard everywhere in the literature and is used in this article also.
The complement B_{ijk...,pqr...} of a minor M_{ijk...,pqr...} of a square matrix A is formed by the determinant of the matrix A from which all the rows (ijk...) and columns (pqr...) associated with M_{ijk...,pqr...} have been removed. The complement of the first minor of an element a_{ij} is merely that element.[5]
The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix A = (a_{ij}), the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them. In other words, defining C_{ij} = (−1)^{i+j} M_{ij}, the cofactor expansion along the j-th column gives:
\[ \det(\mathbf{A}) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj} = \sum_{i=1}^{n} a_{ij}C_{ij}. \]
The cofactor expansion along the i-th row gives:
\[ \det(\mathbf{A}) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij}C_{ij}. \]
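The claim that the expansion gives the same value along any row or column can be checked numerically; a small pure-Python sketch (the 3 × 3 matrix and function names are illustrative):

```python
def submatrix(a, i, j):
    """Delete row i and column j (0-based) from a 2D list."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]

def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum(a[0][j] * (-1) ** j * det(submatrix(a, 0, j))
               for j in range(len(a)))

def det_along_row(a, i):
    """Cofactor expansion along row i: sum_j a_ij * (-1)^(i+j) * M_ij."""
    return sum(a[i][j] * (-1) ** (i + j) * det(submatrix(a, i, j))
               for j in range(len(a)))

def det_along_col(a, j):
    """Cofactor expansion along column j: sum_i a_ij * (-1)^(i+j) * M_ij."""
    return sum(a[i][j] * (-1) ** (i + j) * det(submatrix(a, i, j))
               for i in range(len(a)))

A = [[2, 0, 1], [1, 3, 4], [0, 5, 6]]
print(det(A), det_along_row(A, 1), det_along_col(A, 2))  # 1 1 1
```

All three calls agree, as Laplace's formula guarantees, regardless of which row or column is chosen.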
One can write down the inverse of an invertible matrix by computing its cofactors by using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, comatrix):
\[ \mathbf{C} = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{pmatrix}. \]
Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:
\[ \mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})}\, \mathbf{C}^{\mathsf{T}}. \]
The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.
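The adjugate-based inverse can be sketched in pure Python with exact `Fraction` arithmetic; the 2 × 2 example matrix is an illustrative assumption:

```python
from fractions import Fraction

def submatrix(a, i, j):
    """Delete row i and column j (0-based) from a 2D list."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]

def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum(a[0][j] * (-1) ** j * det(submatrix(a, 0, j))
               for j in range(len(a)))

def inverse(a):
    """A^{-1} = adj(A) / det(A), where adj(A) is the transposed cofactor matrix."""
    n = len(a)
    d = det(a)
    cof = [[(-1) ** (i + j) * det(submatrix(a, i, j)) for j in range(n)]
           for i in range(n)]
    # Transpose the cofactor matrix and divide by the determinant.
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]
print(inverse(A))  # [[3, -1], [-5, 2]], since det(A) = 1
```

For this matrix det(A) = 1, so the inverse is simply the adjugate; with a different determinant the `Fraction` entries keep the result exact.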
The above formula can be generalized as follows: let I = (i_1, ..., i_k) and J = (j_1, ..., j_k) be ordered sequences (in natural order) of indexes (here A is an n × n matrix). Then[6]
\[ [\mathbf{A}^{-1}]_{I,J} = \pm\frac{[\mathbf{A}]_{J',I'}}{\det \mathbf{A}}, \]
where I′, J′ denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I′, but not in both (similarly for J and J′), and [A]_{I,J} denotes the determinant of the submatrix of A formed by choosing the rows with index in I and the columns with index in J. A simple proof can be given using the wedge product. Indeed,
\[ (\mathbf{A}^{-1}e_{j_1}) \wedge \cdots \wedge (\mathbf{A}^{-1}e_{j_k}) \wedge e_{i'_1} \wedge \cdots \wedge e_{i'_{n-k}} = \pm[\mathbf{A}^{-1}]_{I,J}\; e_1 \wedge \cdots \wedge e_n, \]
where e_1, ..., e_n are the basis vectors. Acting by A on both sides, one gets
\[ e_{j_1} \wedge \cdots \wedge e_{j_k} \wedge (\mathbf{A}e_{i'_1}) \wedge \cdots \wedge (\mathbf{A}e_{i'_{n-k}}) = \pm[\mathbf{A}^{-1}]_{I,J}(\det \mathbf{A})\; e_1 \wedge \cdots \wedge e_n, \]
and the left-hand side equals \(\pm[\mathbf{A}]_{J',I'}\; e_1 \wedge \cdots \wedge e_n\). The sign can be worked out to be \((-1)^{\sum_{s=1}^{k} i_s + \sum_{s=1}^{k} j_s}\), so the sign is determined by the sums of the elements of I and J.
If an m × n matrix with real entries (or entries from any other field) has rank r, then there exists at least one non-zero r × r minor, while all larger minors are zero.
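This characterization of rank can be tested directly, though enumerating minors is combinatorially expensive and only sensible for tiny matrices. A hedged sketch (example matrix chosen so one row is a multiple of another):

```python
from itertools import combinations

def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** c * a[0][c] * det([row[:c] + row[c + 1:] for row in a[1:]])
               for c in range(len(a)))

def rank_by_minors(a):
    """Largest r such that some r x r minor of `a` is non-zero.
    Exponential in matrix size; for demonstration only."""
    m, n = len(a), len(a[0])
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if det([[a[i][j] for j in cols] for i in rows]) != 0:
                    return r
    return 0

A = [[1, 2, 3],
     [2, 4, 6],   # = 2 * row 1, so the 3 x 3 minor vanishes
     [0, 1, 1]]
print(rank_by_minors(A))  # 2
```

The single 3 × 3 minor is zero (two proportional rows), but a non-zero 2 × 2 minor exists, so the rank is 2.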
We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., n} with k elements, then we write [A]_{I,J} for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.
Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., p} with k elements. Then
\[ [\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K}\, [\mathbf{B}]_{K,J}, \]
where the sum extends over all subsets K of {1, ..., n} with k elements. This formula is a straightforward extension of the Cauchy–Binet formula.
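The minors-of-a-product identity can be verified on a small example; a sketch with illustrative 2 × 3 and 3 × 2 matrices (here k = 2, so the identity reduces to Cauchy–Binet):

```python
from itertools import combinations

def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** c * a[0][c] * det([row[:c] + row[c + 1:] for row in a[1:]])
               for c in range(len(a)))

def minor(a, rows, cols):
    """[a]_{rows,cols}: determinant of the submatrix on those index sets."""
    return det([[a[i][j] for j in cols] for i in rows])

A = [[1, 2, 0], [0, 1, 3]]        # 2 x 3
B = [[1, 0], [2, 1], [1, 1]]      # 3 x 2
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]

I, J, k = (0, 1), (0, 1), 2
lhs = minor(AB, I, J)
rhs = sum(minor(A, I, K) * minor(B, K, J)
          for K in combinations(range(3), k))
print(lhs, rhs)  # 10 10
```

Summing over the three 2-element subsets K of the inner index set reproduces the 2 × 2 minor of AB, which here is det(AB) itself.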
A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the k-th exterior power map.
If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix
\[ \begin{pmatrix} 1 & 4 \\ 3 & -1 \\ 2 & 1 \end{pmatrix} \]
are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product
\[ (e_1 + 3e_2 + 2e_3) \wedge (4e_1 - e_2 + e_3), \]
where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and alternating, \(e_i \wedge e_i = 0\), and antisymmetric, \(e_i \wedge e_j = -\,e_j \wedge e_i\), we can simplify this expression to
\[ -13\, e_1 \wedge e_2 - 7\, e_1 \wedge e_3 + 5\, e_2 \wedge e_3, \]
where the coefficients agree with the minors computed earlier.
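The correspondence between wedge-product coefficients and 2 × 2 minors can be checked in a few lines; the column vectors below are an illustrative assumption consistent with the minors −13, −7, and 5 quoted above:

```python
from itertools import combinations

# Columns of an illustrative 3 x 2 matrix.
u = [1, 3, 2]    # first column:  e1 + 3 e2 + 2 e3
v = [4, -1, 1]   # second column: 4 e1 - e2 + e3

# By bilinearity and antisymmetry, the coefficient of e_i ^ e_j in
# u ^ v is u_i v_j - u_j v_i, i.e. the 2 x 2 minor from rows i and j.
wedge = {(i, j): u[i] * v[j] - u[j] * v[i]
         for i, j in combinations(range(3), 2)}
print(wedge)  # {(0, 1): -13, (0, 2): -7, (1, 2): 5}
```

Each coefficient of the 2-vector is exactly the 2 × 2 minor taken from the corresponding pair of rows.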
In some books, instead of cofactor the term adjunct is used.[7] Moreover, it is denoted as A_{ij} and defined in the same way as the cofactor:
\[ \mathbf{A}_{ij} = (-1)^{i+j} \mathbf{M}_{ij}. \]
Using this notation the inverse matrix is written this way:
\[ \mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix}. \]
Keep in mind that adjunct is not adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.