Commutation matrix

From Wikipedia, the free encyclopedia
Not to be confused with the commutator $AB - BA$ of two square matrices $A$ and $B$ in ring theory.

In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K(m,n) is the nm × mn permutation matrix which, for any m × n matrix A, transforms vec(A) into vec(A^T):

K(m,n) vec(A) = vec(A^T).

Here vec(A) is the mn × 1 column vector obtained by stacking the columns of A on top of one another:

$$\operatorname{vec}(\mathbf{A}) = [\mathbf{A}_{1,1}, \ldots, \mathbf{A}_{m,1}, \mathbf{A}_{1,2}, \ldots, \mathbf{A}_{m,2}, \ldots, \mathbf{A}_{1,n}, \ldots, \mathbf{A}_{m,n}]^{\mathrm{T}}$$

where A = [A_{i,j}]. In other words, vec(A) is the vector obtained by vectorizing A in column-major order. Similarly, vec(A^T) is the vector obtained by vectorizing A in row-major order. The cycles and other properties of this permutation have been heavily studied for in-place matrix transposition algorithms.
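The defining property can be checked numerically with a short NumPy sketch; `comm_mat` below is a hypothetical helper built by permuting the rows of the identity matrix (the same construction used in the Code section further down):

```python
import numpy as np

# Build K(m, n) by permuting the rows of the identity matrix
# (same construction as in the Code section below).
def comm_mat(m, n):
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[w, :]

# Check the defining property K(m,n) vec(A) = vec(A^T) on an arbitrary matrix.
rng = np.random.default_rng(42)
m, n = 4, 3
A = rng.standard_normal((m, n))
vec_A = A.ravel(order="F")     # column-major vectorization, vec(A)
vec_At = A.T.ravel(order="F")  # vec(A^T), i.e. A read in row-major order
assert np.allclose(comm_mat(m, n) @ vec_A, vec_At)
```

The `order="F"` arguments make NumPy use column-major (Fortran) order, matching the column-stacking definition of vec.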

In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator.[1]

Properties

  • The commutation matrix is a special type of permutation matrix, and is therefore orthogonal. In particular, K(m,n) is the matrix of the permutation π over {1, …, mn} given by
$$\pi(i+m(j-1)) = j+n(i-1), \quad i=1,\dots,m, \quad j=1,\dots,n.$$
  • The determinant of K(m,n) is $(-1)^{{\frac{1}{4}}n(n-1)m(m-1)}$.
  • Replacing A with A^T in the definition of the commutation matrix shows that K(m,n) = (K(n,m))^T. Therefore, in the special case of m = n the commutation matrix is an involution and symmetric.
  • The main use of the commutation matrix, and the source of its name, is to commute the Kronecker product: for every m × n matrix A and every r × q matrix B,
$$\mathbf{K}^{(r,m)}(\mathbf{A}\otimes \mathbf{B})\mathbf{K}^{(n,q)} = \mathbf{B}\otimes \mathbf{A}.$$
This property is often used in developing the higher order statistics of Wishart covariance matrices.[2]
  • The case of n = q = 1 for the above equation states that for any column vectors v, w of sizes m and r respectively,
$$\mathbf{K}^{(r,m)}(\mathbf{v}\otimes \mathbf{w}) = \mathbf{w}\otimes \mathbf{v}.$$
This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory.
  • Two explicit forms for the commutation matrix are as follows: if e_{r,j} denotes the j-th canonical vector of dimension r (i.e. the vector with 1 in the j-th coordinate and 0 elsewhere) then
$$\mathbf{K}^{(r,m)} = \sum_{i=1}^{r}\sum_{j=1}^{m}\left(\mathbf{e}_{r,i}{\mathbf{e}_{m,j}}^{\mathrm{T}}\right)\otimes\left(\mathbf{e}_{m,j}{\mathbf{e}_{r,i}}^{\mathrm{T}}\right) = \sum_{i=1}^{r}\sum_{j=1}^{m}\left(\mathbf{e}_{r,i}\otimes\mathbf{e}_{m,j}\right)\left(\mathbf{e}_{m,j}\otimes\mathbf{e}_{r,i}\right)^{\mathrm{T}}.$$
  • The commutation matrix may be expressed as the following block matrix:
$$\mathbf{K}^{(m,n)} = \begin{bmatrix}\mathbf{K}_{1,1} & \cdots & \mathbf{K}_{1,n}\\ \vdots & \ddots & \vdots\\ \mathbf{K}_{m,1} & \cdots & \mathbf{K}_{m,n}\end{bmatrix},$$
where the (p, q) entry of the n × m block matrix K_{i,j} is given by
$$\mathbf{K}_{ij}(p,q) = \begin{cases}1 & i = q \text{ and } j = p,\\ 0 & \text{otherwise}.\end{cases}$$
For example,
$$\mathbf{K}^{(3,4)} = \left[{\begin{array}{ccc|ccc|ccc|ccc}1&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&1&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&1&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&1&0&0\\\hline 0&1&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&1&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&1&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&1&0\\\hline 0&0&1&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&1&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&1&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&1\end{array}}\right].$$
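The properties above are easy to sanity-check numerically. The following sketch (using the same `comm_mat` construction as in the Code section; the matrix sizes are arbitrary choices) verifies the Kronecker-commutation identity, the swap-operator property, the symmetry of the square case, and the determinant formula:

```python
import numpy as np

def comm_mat(m, n):
    # K(m, n): rows of the identity permuted so that K vec(A) = vec(A^T)
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[w, :]

rng = np.random.default_rng(0)
m, n, r, q = 2, 3, 4, 5          # arbitrary example sizes
A = rng.standard_normal((m, n))
B = rng.standard_normal((r, q))

# Commuting the Kronecker product: K^(r,m) (A ⊗ B) K^(n,q) = B ⊗ A
assert np.allclose(comm_mat(r, m) @ np.kron(A, B) @ comm_mat(n, q),
                   np.kron(B, A))

# Swap-operator property (case n = q = 1): K^(r,m) (v ⊗ w) = w ⊗ v
v, w = rng.standard_normal(m), rng.standard_normal(r)
assert np.allclose(comm_mat(r, m) @ np.kron(v, w), np.kron(w, v))

# For m = n, K is a symmetric involution
K = comm_mat(n, n)
assert np.array_equal(K, K.T)
assert np.array_equal(K @ K, np.eye(n * n))

# Determinant formula: det K(m,n) = (-1)^{n(n-1)m(m-1)/4}
assert np.isclose(np.linalg.det(comm_mat(m, n)),
                  (-1) ** (n * (n - 1) * m * (m - 1) // 4))
```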

Code


For both square and rectangular matrices of m rows and n columns, the commutation matrix can be generated by the code below.

Python

import numpy as np

def comm_mat(m, n):
    # determine permutation applied by K
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    # apply this permutation to the rows (i.e. to each column) of the identity matrix and return the result
    return np.eye(m * n)[w, :]

Alternatively, a version without imports:

# Kronecker delta
def delta(i, j):
    return int(i == j)

def comm_mat(m, n):
    # determine permutation applied by K
    v = [m * j + i for i in range(m) for j in range(n)]
    # apply this permutation to the rows (i.e. to each column) of the identity matrix
    I = [[delta(i, j) for j in range(m * n)] for i in range(m * n)]
    return [I[i] for i in v]

MATLAB

function P = com_mat(m, n)
% determine permutation applied by K
A = reshape(1:m*n, m, n);
v = reshape(A', 1, []);
% apply this permutation to the rows (i.e. to each column) of the identity matrix
P = eye(m*n);
P = P(v, :);

R

# Sparse matrix version
comm_mat <- function(m, n) {
  i <- 1:(m * n)
  j <- NULL
  for (k in 1:m) {
    j <- c(j, m * 0:(n - 1) + k)
  }
  Matrix::sparseMatrix(i = i, j = j, x = 1)
}

Example


Let $A$ denote the following $3 \times 2$ matrix:

$$A = \begin{bmatrix}1&4\\2&5\\3&6\end{bmatrix}.$$

$A$ has the following column-major and row-major vectorizations (respectively):

$$\mathbf{v}_{\text{col}} = \operatorname{vec}(A) = \begin{bmatrix}1\\2\\3\\4\\5\\6\end{bmatrix}, \quad \mathbf{v}_{\text{row}} = \operatorname{vec}(A^{\mathrm{T}}) = \begin{bmatrix}1\\4\\2\\5\\3\\6\end{bmatrix}.$$

The associated commutation matrix is

$$K = \mathbf{K}^{(3,2)} = \begin{bmatrix}1&\cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&1&\cdot&\cdot\\ \cdot&1&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot&1&\cdot\\ \cdot&\cdot&1&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot&\cdot&1\end{bmatrix},$$

(where each $\cdot$ denotes a zero). As expected, the following holds:

$$K^{\mathrm{T}}K = KK^{\mathrm{T}} = \mathbf{I}_6$$
$$K\mathbf{v}_{\text{col}} = \mathbf{v}_{\text{row}}$$
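This worked example can be reproduced directly in NumPy (a sketch; `comm_mat` is the same construction as in the Code section):

```python
import numpy as np

def comm_mat(m, n):
    # K(m, n) as a permutation of the identity matrix's rows
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n, dtype=int)[w, :]

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
K = comm_mat(3, 2)
v_col = A.ravel(order="F")  # vec(A)   = [1 2 3 4 5 6]
v_row = A.ravel(order="C")  # vec(A^T) = [1 4 2 5 3 6]

assert np.array_equal(K @ v_col, v_row)               # K vec(A) = vec(A^T)
assert np.array_equal(K.T @ K, np.eye(6, dtype=int))  # K is orthogonal
```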

References

  1. ^ Watrous, John (2018). The Theory of Quantum Information. Cambridge University Press. p. 94.
  2. ^ von Rosen, Dietrich (1988). "Moments for the Inverted Wishart Distribution". Scandinavian Journal of Statistics. 15: 97–109.
  • Magnus, Jan R.; Neudecker, Heinz (1988). Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley.