In linear algebra, two matrices A and B are said to commute if AB = BA, or equivalently if their commutator AB − BA is zero. The set of matrices that commute with a given matrix A is called the commutant of A (and vice versa).[1]
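The definition can be checked numerically; a minimal NumPy sketch (the specific matrices are illustrative choices, not taken from the text):

```python
import numpy as np

# Powers of the same matrix always commute, so A and B = A^2 form a
# commuting pair.
A = np.array([[1., 2.],
              [0., 3.]])
B = A @ A

commutator = A @ B - B @ A   # [A, B] = AB - BA
print(np.allclose(commutator, 0))   # True: A and B commute

# A pair that does not commute:
C = np.array([[0., 1.],
              [0., 0.]])
D = np.array([[0., 0.],
              [1., 0.]])
print(np.allclose(C @ D, D @ C))    # False
```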
A set of matrices is said to commute if they commute pairwise, meaning that every pair of matrices in the set commutes.
Commuting matrices preserve each other's eigenspaces.[2] As a consequence, commuting matrices over an algebraically closed field are simultaneously triangularizable; that is, there are bases over which they are all upper triangular. In other words, if the matrices A_1, …, A_k commute, there exists a similarity matrix P such that P^(−1)A_iP is upper triangular for all i. The converse is not necessarily true, as the following counterexample shows: the matrices A = [1 1; 0 1] and B = [1 0; 0 2] are both upper triangular, and hence trivially simultaneously triangularizable, yet AB ≠ BA.
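Two matrices can be simultaneously upper triangular and still fail to commute; a NumPy sketch of such a pair (an illustrative choice):

```python
import numpy as np

# Both matrices are already upper triangular, hence trivially
# simultaneously triangularizable -- yet they do not commute.
A = np.array([[1., 1.],
              [0., 1.]])
B = np.array([[1., 0.],
              [0., 2.]])

print(A @ B)   # [[1. 2.], [0. 2.]]
print(B @ A)   # [[1. 1.], [0. 2.]]
print(np.allclose(A @ B, B @ A))   # False
```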
However, if the square of the commutator of two matrices A and B is zero, that is, (AB − BA)^2 = 0, then the converse is true.[3]
Two diagonalizable matrices A and B commute (AB = BA) if they are simultaneously diagonalizable (that is, if there exists an invertible matrix P such that both P^(−1)AP and P^(−1)BP are diagonal).[4]: p. 64  The converse is also true; that is, if two diagonalizable matrices commute, they are simultaneously diagonalizable.[5] More generally, two commuting matrices (not assumed diagonalizable) are simultaneously diagonalizable as soon as one of them has no multiple eigenvalues.[6]
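When one of two commuting matrices has distinct eigenvalues, its eigenvector basis diagonalizes both; a sketch, using a matrix B that commutes with A because it is a polynomial in A (an illustrative construction):

```python
import numpy as np

# A has distinct eigenvalues 2 and 5; B = A^2 - 3A commutes with A.
A = np.array([[2., 1.],
              [0., 5.]])
B = A @ A - 3. * A
assert np.allclose(A @ B, B @ A)

# The eigenvector matrix P of A diagonalizes B as well.
_, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)
DA = Pinv @ A @ P
DB = Pinv @ B @ P
print(np.allclose(DA, np.diag(np.diag(DA))))   # True: P^-1 A P is diagonal
print(np.allclose(DB, np.diag(np.diag(DB))))   # True: P^-1 B P is diagonal too
```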
If A and B commute, they have a common eigenvector. If A has distinct eigenvalues and commutes with B, then the eigenvectors of A are also eigenvectors of B.
If one of the matrices has the property that its minimal polynomial coincides with its characteristic polynomial (that is, it has the maximal degree), which happens in particular whenever the characteristic polynomial has only simple roots, then the other matrix can be written as a polynomial in the first.
As a direct consequence of simultaneous triangularizability, the eigenvalues of two commuting complex matrices A, B with their algebraic multiplicities (the multisets of roots of their characteristic polynomials) can be matched up as α_i ↔ β_i in such a way that the multiset of eigenvalues of any polynomial P(A, B) in the two matrices is the multiset of the values P(α_i, β_i). This theorem is due to Frobenius.[7]
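This matching can be observed numerically; a sketch in which commuting matrices are built from a common eigenbasis (an assumption made for the illustration), so the matched pairs are the corresponding diagonal entries:

```python
import numpy as np

# Build commuting matrices from a common eigenbasis:
# A = P D1 P^-1, B = P D2 P^-1, so the pairing is (d1_i, d2_i).
P = np.array([[1., 1.],
              [0., 1.]])
Pinv = np.linalg.inv(P)
d1 = np.array([2., 5.])
d2 = np.array([-1., 4.])
A = P @ np.diag(d1) @ Pinv
B = P @ np.diag(d2) @ Pinv
assert np.allclose(A @ B, B @ A)

# Eigenvalues of the polynomial P(A, B) = AB + A are the matched
# values d1_i * d2_i + d1_i.
vals = np.linalg.eigvals(A @ B + A)
expected = d1 * d2 + d1
print(np.allclose(sorted(vals.real), sorted(expected)))   # True
```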
Two Hermitian matrices commute if their eigenspaces coincide. In particular, two Hermitian matrices without multiple eigenvalues commute if they share the same set of eigenvectors. This follows by considering the eigenvalue decompositions of both matrices. Let A and B be two Hermitian matrices. A and B have common eigenspaces when they can be written as A = UΛ_1U* and B = UΛ_2U*, where U is unitary and Λ_1, Λ_2 are diagonal. It then follows that AB = UΛ_1Λ_2U* = UΛ_2Λ_1U* = BA, since diagonal matrices commute.
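A numerical sketch of this construction, using an arbitrary unitary eigenbasis obtained from a QR decomposition (the matrix sizes and eigenvalue lists are illustrative choices):

```python
import numpy as np

# A common unitary eigenbasis U, from the QR decomposition of a random
# complex matrix.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(M)           # U is unitary

L1 = np.diag([1., 2., 3.])       # real eigenvalues => Hermitian result
L2 = np.diag([-1., 0., 7.])
A = U @ L1 @ U.conj().T          # eigenvectors of A are the columns of U
B = U @ L2 @ U.conj().T          # same eigenvectors, different eigenvalues

print(np.allclose(A, A.conj().T))   # True: A is Hermitian
print(np.allclose(A @ B, B @ A))    # True: shared eigenbasis => commute
```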
The property of two matrices commuting is not transitive: a matrix A may commute with both B and C even though B and C do not commute with each other. As an example, the identity matrix commutes with all matrices, which between them do not all commute. If the set of matrices considered is restricted to Hermitian matrices without multiple eigenvalues, then commutativity is transitive, as a consequence of the characterization in terms of eigenvectors.
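The identity-matrix example can be spelled out directly (a minimal sketch with illustrative matrices):

```python
import numpy as np

I = np.eye(2)
B = np.array([[0., 1.],
              [0., 0.]])
C = np.array([[0., 0.],
              [1., 0.]])

# I commutes with both B and C ...
print(np.allclose(I @ B, B @ I))   # True
print(np.allclose(I @ C, C @ I))   # True
# ... but B and C do not commute with each other.
print(np.allclose(B @ C, C @ B))   # False
```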
An n × n matrix A commutes with every other n × n matrix if and only if it is a scalar matrix, that is, a matrix of the form λI, where I is the n × n identity matrix and λ is a scalar. In other words, the center of the group of invertible n × n matrices under multiplication is the subgroup of scalar matrices.
Fix a finite field F_q and let c(n) denote the number of ordered pairs of commuting n × n matrices over F_q. W. Feit and N. J. Fine[8] established a product formula for the generating function of the numbers c(n).
A Jordan block commutes precisely with the upper triangular matrices that are constant along each diagonal band (upper triangular Toeplitz matrices).
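A sketch of this commutation, with an illustrative 3 × 3 Jordan block and arbitrary band values:

```python
import numpy as np

# A 3x3 Jordan block J with eigenvalue 4, and an upper triangular
# Toeplitz matrix T (constant along each diagonal band).
J = np.array([[4., 1., 0.],
              [0., 4., 1.],
              [0., 0., 4.]])
a, b, c = 2., -1., 5.            # arbitrary band values
T = np.array([[a, b, c],
              [0., a, b],
              [0., 0., a]])

# Both J and T are polynomials in the nilpotent shift matrix N,
# so they commute.
print(np.allclose(J @ T, T @ J))   # True
```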
If the product of two symmetric matrices A and B is symmetric, then they must commute: since (AB)^T = B^T A^T = BA, the product AB is symmetric exactly when AB = BA. As a consequence, every diagonal matrix commutes with all other diagonal matrices, because the product of two diagonal matrices is diagonal and hence symmetric.[9][10]
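The equivalence between the product being symmetric and the matrices commuting can be verified on examples (illustrative choices, one pair failing and one pair satisfying both conditions):

```python
import numpy as np

# A symmetric pair whose product is NOT symmetric: they do not commute.
A = np.array([[2., 1.],
              [1., 3.]])
B = np.array([[0., 1.],
              [1., 0.]])
AB = A @ B
print(np.allclose(AB, AB.T))       # False: product not symmetric
print(np.allclose(A @ B, B @ A))   # False: they do not commute

# Diagonal matrices: product is diagonal, hence symmetric, so they commute.
D1 = np.diag([1., 2.])
D2 = np.diag([3., 4.])
print(np.allclose(D1 @ D2, D2 @ D1))   # True
```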
The notion of commuting matrices was introduced by Cayley in his memoir on the theory of matrices, which also provided the first axiomatization of matrices. The first significant results on commuting matrices were proved by Frobenius in 1878.[11]
^ Horn, Roger A.; Johnson, Charles R. (2012). Matrix Analysis. Cambridge University Press. p. 70. ISBN 9780521839402.
^ Horn, Roger A.; Johnson, Charles R. (2012). Matrix Analysis. Cambridge University Press. p. 127. ISBN 9780521839402.
^ Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 9780521839402.
^ Without loss of generality, one may suppose that the first matrix A is diagonal. In this case, commutativity implies that if an entry b_ij of the second matrix is nonzero, then a_ii = a_jj. After a permutation of rows and columns, the two matrices become simultaneously block diagonal. In each block, the first matrix is a scalar multiple of an identity matrix, and the second one is a diagonalizable matrix. So, diagonalizing the blocks of the second matrix does not change the first matrix, and allows a simultaneous diagonalization.
^ Drazin, M. (1951), "Some Generalizations of Matrix Commutativity", Proceedings of the London Mathematical Society, Third Series, 1 (1): 222–231, doi:10.1112/plms/s3-1.1.222.