
Vectorization (mathematics)

From Wikipedia, the free encyclopedia
Conversion of a matrix or a tensor to a vector
For other uses, see Vectorization.

In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an m × n matrix A, denoted vec(A), is the mn × 1 column vector obtained by stacking the columns of the matrix A on top of one another:

$$\operatorname{vec}(A) = [a_{1,1}, \ldots, a_{m,1}, a_{1,2}, \ldots, a_{m,2}, \ldots, a_{1,n}, \ldots, a_{m,n}]^{\mathrm{T}}$$

Here, $a_{i,j}$ represents the element in the i-th row and j-th column of A, and the superscript $^{\mathrm{T}}$ denotes the transpose. Vectorization expresses, through coordinates, the isomorphism $\mathbf{R}^{m \times n} := \mathbf{R}^{m} \otimes \mathbf{R}^{n} \cong \mathbf{R}^{mn}$ between these spaces (of matrices and of vectors, respectively).

For example, for the 2×2 matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the vectorization is $\operatorname{vec}(A) = \begin{bmatrix} a \\ c \\ b \\ d \end{bmatrix}$.

The connection between the vectorization of A and the vectorization of its transpose is given by the commutation matrix.
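
As an illustrative sketch (not part of the article's sources), NumPy's column-major (`order='F'`) reshape reproduces this column-stacking definition; the example uses the 2×2 matrix above with a=1, b=2, c=3, d=4:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])  # [[a, b], [c, d]] with a=1, b=2, c=3, d=4

# vec(A): stack the columns of A on top of one another,
# i.e. read the entries in column-major (Fortran) order
vec_A = A.reshape(-1, order='F')
print(vec_A)  # [1 3 2 4], i.e. [a, c, b, d]
```

Note that the default `A.flatten()` uses row-major (C) order and therefore produces the row-major vectorization instead.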

Compatibility with Kronecker products


The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular,

$$\operatorname{vec}(ABC) = (C^{\mathrm{T}} \otimes A)\operatorname{vec}(B)$$

for matrices A, B, and C of dimensions k×l, l×m, and m×n.[note 1] For example, if $\operatorname{ad}_A(X) = AX - XA$ (the adjoint endomorphism of the Lie algebra gl(n, C) of all n×n matrices with complex entries), then

$$\operatorname{vec}(\operatorname{ad}_A(X)) = (I_n \otimes A - A^{\mathrm{T}} \otimes I_n)\operatorname{vec}(X),$$

where $I_n$ is the n×n identity matrix.

There are two other useful formulations:

$$\begin{aligned}\operatorname{vec}(ABC) &= (I_n \otimes AB)\operatorname{vec}(C) = (C^{\mathrm{T}}B^{\mathrm{T}} \otimes I_k)\operatorname{vec}(A) \\ \operatorname{vec}(AB) &= (I_m \otimes A)\operatorname{vec}(B) = (B^{\mathrm{T}} \otimes I_k)\operatorname{vec}(A)\end{aligned}$$
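
These identities can be checked numerically; a minimal NumPy sketch (random matrices with the dimensions k×l, l×m, and m×n used in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
k, l, m, n = 2, 3, 4, 5
A = rng.standard_normal((k, l))
B = rng.standard_normal((l, m))
C = rng.standard_normal((m, n))

def vec(M):
    # column-stacking vectorization (column-major order)
    return M.reshape(-1, order='F')

# vec(ABC) = (C^T ⊗ A) vec(B) = (I_n ⊗ AB) vec(C) = (C^T B^T ⊗ I_k) vec(A)
lhs = vec(A @ B @ C)
assert np.allclose(lhs, np.kron(C.T, A) @ vec(B))
assert np.allclose(lhs, np.kron(np.eye(n), A @ B) @ vec(C))
assert np.allclose(lhs, np.kron(C.T @ B.T, np.eye(k)) @ vec(A))

# vec(AB) = (I_m ⊗ A) vec(B) = (B^T ⊗ I_k) vec(A)
assert np.allclose(vec(A @ B), np.kron(np.eye(m), A) @ vec(B))
assert np.allclose(vec(A @ B), np.kron(B.T, np.eye(k)) @ vec(A))
```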

If B is a diagonal matrix (i.e., $B = \operatorname{diag}(b_1, \dots, b_n)$), the vectorization can be written using the column-wise Kronecker product $\ast$ (see Khatri–Rao product) and the main diagonal $b = [b_1, \dots, b_n]^{\mathrm{T}}$ of B:

$$\operatorname{vec}(ABC) = (C^{\mathrm{T}} \ast A)\,b$$
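
A sketch of the diagonal case; `khatri_rao` below is a small helper written for this illustration (SciPy users could use `scipy.linalg.khatri_rao` instead):

```python
import numpy as np

def vec(M):
    return M.reshape(-1, order='F')

def khatri_rao(X, Y):
    # column-wise Kronecker product: the i-th column is X[:, i] ⊗ Y[:, i]
    return np.stack([np.kron(X[:, i], Y[:, i]) for i in range(X.shape[1])],
                    axis=1)

rng = np.random.default_rng(1)
m, n, p = 3, 4, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(n)          # main diagonal of B
C = rng.standard_normal((n, p))

# vec(A diag(b) C) = (C^T ∗ A) b
lhs = vec(A @ np.diag(b) @ C)
rhs = khatri_rao(C.T, A) @ b
assert np.allclose(lhs, rhs)
```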

More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices.[1]

Compatibility with Hadamard products


Vectorization is an algebra homomorphism from the space of n × n matrices with the Hadamard (entrywise) product to $\mathbf{C}^{n^2}$ with its Hadamard product:

$$\operatorname{vec}(A \circ B) = \operatorname{vec}(A) \circ \operatorname{vec}(B).$$
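
Since vec merely reindexes the entries, this homomorphism property is immediate to verify; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def vec(M):
    return M.reshape(-1, order='F')

# vec(A ∘ B) = vec(A) ∘ vec(B), where ∘ is the entrywise (Hadamard) product
assert np.allclose(vec(A * B), vec(A) * vec(B))
```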

Compatibility with inner products


Vectorization is a unitary transformation from the space of n×n matrices with the Frobenius (or Hilbert–Schmidt) inner product to $\mathbf{C}^{n^2}$:

$$\operatorname{tr}(A^{\dagger}B) = \operatorname{vec}(A)^{\dagger}\operatorname{vec}(B),$$

where the superscript $\dagger$ denotes the conjugate transpose.
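
A sketch checking this identity for complex matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def vec(M):
    return M.reshape(-1, order='F')

# tr(A† B) = vec(A)† vec(B): vec preserves the Frobenius inner product
lhs = np.trace(A.conj().T @ B)
rhs = vec(A).conj() @ vec(B)
assert np.allclose(lhs, rhs)
```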

Vectorization as a linear sum


The matrix vectorization operation can be written in terms of a linear sum. Let X be an m × n matrix that we want to vectorize, and let $\mathbf{e}_i$ be the i-th canonical basis vector for the n-dimensional space, that is $\mathbf{e}_i = [0, \dots, 0, 1, 0, \dots, 0]^{\mathrm{T}}$. Let $\mathbf{B}_i$ be an (mn) × m block matrix defined as follows:

$$\mathbf{B}_i = \begin{bmatrix} \mathbf{0} \\ \vdots \\ \mathbf{0} \\ \mathbf{I}_m \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{bmatrix} = \mathbf{e}_i \otimes \mathbf{I}_m$$

$\mathbf{B}_i$ consists of n block matrices of size m × m, stacked column-wise; all of these blocks are zero except for the i-th one, which is the m × m identity matrix $\mathbf{I}_m$.

Then the vectorized version of X can be expressed as follows:

$$\operatorname{vec}(\mathbf{X}) = \sum_{i=1}^{n} \mathbf{B}_i \mathbf{X} \mathbf{e}_i$$

Multiplication of X by $\mathbf{e}_i$ extracts the i-th column, while multiplication by $\mathbf{B}_i$ puts it into the desired position in the final vector.

Alternatively, the linear sum can be expressed using the Kronecker product:

$$\operatorname{vec}(\mathbf{X}) = \sum_{i=1}^{n} \mathbf{e}_i \otimes \mathbf{X}\mathbf{e}_i$$
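
Both sum formulas can be sketched directly in NumPy (the article's 1-based index i is translated to 0-based indexing here):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 4
X = rng.standard_normal((m, n))

def e(i):
    # i-th canonical basis vector of R^n (0-based here)
    v = np.zeros(n)
    v[i] = 1.0
    return v

def B(i):
    # B_i = e_i ⊗ I_m, the (mn) × m block matrix from the text
    return np.kron(e(i).reshape(-1, 1), np.eye(m))

vec_sum = sum(B(i) @ X @ e(i) for i in range(n))           # Σ B_i X e_i
vec_kron = sum(np.kron(e(i), X @ e(i)) for i in range(n))  # Σ e_i ⊗ X e_i

# both agree with the column-stacking vectorization
assert np.allclose(vec_sum, X.reshape(-1, order='F'))
assert np.allclose(vec_kron, X.reshape(-1, order='F'))
```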

Half-vectorization


For a symmetric matrix A, the vector vec(A) contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the n(n + 1)/2 entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, vech(A), of a symmetric n × n matrix A is the n(n + 1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of A:

$$\operatorname{vech}(A) = [A_{1,1}, \ldots, A_{n,1}, A_{2,2}, \ldots, A_{n,2}, \ldots, A_{n-1,n-1}, A_{n,n-1}, A_{n,n}]^{\mathrm{T}}.$$

For example, for the 2×2 matrix $A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$, the half-vectorization is $\operatorname{vech}(A) = \begin{bmatrix} a \\ b \\ d \end{bmatrix}$.

There exist unique matrices transforming the half-vectorization of a matrix to its vectorization, and vice versa, called, respectively, the duplication matrix and the elimination matrix.
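
A minimal sketch of vech using NumPy triangle indices (`vech` is a helper written for this illustration, not a standard NumPy function):

```python
import numpy as np

def vech(M):
    # half-vectorization: stack the entries on and below the main
    # diagonal, column by column. Walking the upper triangle of M^T
    # row by row visits M's lower triangle column by column.
    n = M.shape[0]
    r, c = np.triu_indices(n)
    return M.T[r, c]

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # [[a, b], [b, d]] with a=1, b=2, d=4
print(vech(A))               # [1. 2. 4.], i.e. [a, b, d]
```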

Programming languages


Programming languages that implement matrices may have easy means for vectorization. In Matlab/GNU Octave a matrix A can be vectorized by A(:). GNU Octave also allows vectorization and half-vectorization with vec(A) and vech(A), respectively. Julia has the vec(A) function as well. In Python, NumPy arrays implement the flatten method,[note 1] while in R the desired effect can be achieved via the c() or as.vector() functions or, more efficiently, by removing the dimensions attribute of a matrix A with dim(A) <- NULL. In R, the function vec() of the package 'ks' allows vectorization, and the function vech(), implemented in both the 'ks' and 'sn' packages, allows half-vectorization.[2][3][4]

Applications


Vectorization is used in matrix calculus and its applications, e.g., in establishing moments of random vectors and matrices, asymptotics, and Jacobian and Hessian matrices.[5] It is also used in local sensitivity and statistical diagnostics.[6]

Notes

  1. ^a b The identity for row-major vectorization is $\operatorname{vec}(ABC) = (A \otimes C^{\mathrm{T}})\operatorname{vec}(B)$.


References

  1. ^ Macedo, H. D.; Oliveira, J. N. (2013). "Typing Linear Algebra: A Biproduct-oriented Approach". Science of Computer Programming. 78 (11): 2160–2191. arXiv:1312.4818. doi:10.1016/j.scico.2012.07.012. S2CID 9846072.
  2. ^ Duong, Tarn (2018). "ks: Kernel Smoothing". R package version 1.11.0.
  3. ^ Azzalini, Adelchi (2017). "The R package 'sn': The Skew-Normal and Related Distributions such as the Skew-t". R package version 1.5.1.
  4. ^ Vinod, Hrishikesh D. (2011). "Simultaneous Reduction and Vec Stacking". Hands-on Matrix Algebra Using R: Active and Motivated Learning with Applications. Singapore: World Scientific. pp. 233–248. ISBN 978-981-4313-69-8.
  5. ^ Magnus, Jan; Neudecker, Heinz (2019). Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley. ISBN 978-1-119-54120-2.
  6. ^ Liu, Shuangzhe; Leiva, Victor; Zhuang, Dan; Ma, Tiefeng; Figueroa-Zúñiga, Jorge I. (March 2022). "Matrix differential calculus with applications in the multivariate linear model and its diagnostics". Journal of Multivariate Analysis. 188: 104849. doi:10.1016/j.jmva.2021.104849.