In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an m × n matrix A, denoted vec(A), is the mn × 1 column vector obtained by stacking the columns of the matrix A on top of one another:

vec(A) = [a_{1,1}, …, a_{m,1}, a_{1,2}, …, a_{m,2}, …, a_{1,n}, …, a_{m,n}]^T

Here, a_{i,j} represents the element in the i-th row and j-th column of A, and the superscript ^T denotes the transpose. Vectorization expresses, through coordinates, the isomorphism between these (i.e., of matrices and vectors) as vector spaces.
For example, for the 2×2 matrix A = [a b; c d], the vectorization is vec(A) = [a, c, b, d]^T.
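In NumPy, column stacking corresponds to flattening in column-major (Fortran) order; a minimal sketch with arbitrary illustrative entries (a = 1, b = 2, c = 3, d = 4):

```python
import numpy as np

# Illustrative 2x2 matrix [[a, b], [c, d]] with a=1, b=2, c=3, d=4.
A = np.array([[1, 2],
              [3, 4]])

# vec(A) stacks the columns of A; NumPy's default (row-major) flatten
# would instead produce vec(A^T), so column-major order is required.
vec_A = A.flatten(order='F')

print(vec_A)  # [1 3 2 4], i.e. [a, c, b, d]^T
```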
The connection between the vectorization of A and the vectorization of its transpose is given by the commutation matrix.
The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular,

vec(ABC) = (C^T ⊗ A) vec(B)

for matrices A, B, and C of dimensions k×l, l×m, and m×n.[note 1] For example, if ad_A(X) = AX − XA (the adjoint endomorphism of the Lie algebra gl(n, C) of all n×n matrices with complex entries), then vec(ad_A(X)) = (I_n ⊗ A − A^T ⊗ I_n) vec(X), where I_n is the n×n identity matrix.
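Both Kronecker-product identities can be checked numerically; a sketch in NumPy with arbitrary illustrative dimensions and random entries:

```python
import numpy as np

rng = np.random.default_rng(0)
k, l, m, n = 2, 3, 4, 5               # arbitrary illustrative dimensions
A = rng.standard_normal((k, l))
B = rng.standard_normal((l, m))
C = rng.standard_normal((m, n))

def vec(M):
    # Column-stacking vectorization.
    return M.flatten(order='F')

# vec(ABC) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# vec(MX - XM) = (I kron M - M^T kron I) vec(X) for square M, X
M = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))
I = np.eye(3)
assert np.allclose(vec(M @ X - X @ M),
                   (np.kron(I, M) - np.kron(M.T, I)) @ vec(X))
```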
There are two other useful formulations:

vec(ABC) = (I_n ⊗ AB) vec(C) = (C^T B^T ⊗ I_k) vec(A)
vec(AB) = (I_m ⊗ A) vec(B) = (B^T ⊗ I_k) vec(A)
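The alternative formulations vec(ABC) = (I_n ⊗ AB) vec(C) = (C^T B^T ⊗ I_k) vec(A) and vec(AB) = (I_m ⊗ A) vec(B) = (B^T ⊗ I_k) vec(A) can likewise be verified numerically (dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
k, l, m, n = 2, 3, 4, 5               # arbitrary illustrative dimensions
A = rng.standard_normal((k, l))
B = rng.standard_normal((l, m))
C = rng.standard_normal((m, n))

vec = lambda M: M.flatten(order='F')  # column-stacking vectorization

# vec(ABC) = (I_n kron AB) vec(C) = (C^T B^T kron I_k) vec(A)
target = vec(A @ B @ C)
assert np.allclose(np.kron(np.eye(n), A @ B) @ vec(C), target)
assert np.allclose(np.kron(C.T @ B.T, np.eye(k)) @ vec(A), target)

# vec(AB) = (I_m kron A) vec(B) = (B^T kron I_k) vec(A)
target2 = vec(A @ B)
assert np.allclose(np.kron(np.eye(m), A) @ vec(B), target2)
assert np.allclose(np.kron(B.T, np.eye(k)) @ vec(A), target2)
```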
If B is a diagonal matrix (i.e., B = diag(d_B)), the vectorization can be written using the column-wise Kronecker product ∗ (see Khatri–Rao product) and the main diagonal of B:

vec(ABC) = (C^T ∗ A) d_B

where d_B is the column vector of the main diagonal of B.
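A numerical sketch of the diagonal-B identity vec(ABC) = (C^T ∗ A) d_B; the column-wise Kronecker (Khatri–Rao) product is formed explicitly here, since NumPy has no built-in for it (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
k, m, n = 2, 3, 4                    # arbitrary illustrative dimensions
A = rng.standard_normal((k, m))
C = rng.standard_normal((m, n))
d_B = rng.standard_normal(m)         # main diagonal of B
B = np.diag(d_B)

vec = lambda M: M.flatten(order='F')

# Column-wise Kronecker (Khatri-Rao) product of C^T (n x m) and A (k x m):
# column j is the Kronecker product of the j-th columns of the two factors.
KR = np.stack([np.kron(C.T[:, j], A[:, j]) for j in range(m)], axis=1)

# vec(ABC) = (C^T * A) d_B
assert np.allclose(vec(A @ B @ C), KR @ d_B)
```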
More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices.[1]
Vectorization is an algebra homomorphism from the space of n × n matrices with the Hadamard (entrywise) product to C^(n²) with its Hadamard product:

vec(A ∘ B) = vec(A) ∘ vec(B).
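The homomorphism property vec(A ∘ B) = vec(A) ∘ vec(B) follows because both sides just reorder the entrywise products; a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

vec = lambda M: M.flatten(order='F')

# vec(A o B) = vec(A) o vec(B): vectorization maps the entrywise
# product of matrices to the entrywise product of vectors.
assert np.allclose(vec(A * B), vec(A) * vec(B))
```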
Vectorization is a unitary transformation from the space of n × n matrices with the Frobenius (or Hilbert–Schmidt) inner product to C^(n²):

tr(A† B) = vec(A)† vec(B),

where the superscript † denotes the conjugate transpose.
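A sketch verifying tr(A† B) = vec(A)† vec(B) on random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

vec = lambda M: M.flatten(order='F')

# tr(A† B) = vec(A)† vec(B): the Frobenius inner product of two
# matrices equals the ordinary inner product of their vectorizations.
lhs = np.trace(A.conj().T @ B)
rhs = vec(A).conj() @ vec(B)
assert np.allclose(lhs, rhs)
```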
The matrix vectorization operation can be written in terms of a linear sum. Let X be an m × n matrix that we want to vectorize, and let e_i be the i-th canonical basis vector for the n-dimensional space, that is e_i = [0, …, 0, 1, 0, …, 0]^T with the 1 in the i-th position. Let B_i be the (mn) × m block matrix defined as follows:

B_i = [0_m ⋯ 0_m I_m 0_m ⋯ 0_m]^T

B_i consists of n block matrices of size m × m, stacked column-wise; all of these blocks are zero except for the i-th one, which is the m × m identity matrix I_m.

Then the vectorized version of X can be expressed as follows:

vec(X) = Σ_{i=1}^n B_i X e_i

Multiplication of X by e_i extracts the i-th column, while multiplication by B_i puts it into the desired position in the final vector.
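The linear-sum construction can be sketched directly in NumPy, building each B_i as a Kronecker product e_i ⊗ I_m (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 4                                        # illustrative sizes
X = rng.standard_normal((m, n))

vec_X = np.zeros(m * n)
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0                                   # i-th canonical basis vector
    B_i = np.kron(e_i.reshape(n, 1), np.eye(m))    # (mn) x m, I_m in block i
    vec_X += B_i @ (X @ e_i)                       # X e_i extracts column i

assert np.allclose(vec_X, X.flatten(order='F'))
```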
Alternatively, the linear sum can be expressed using the Kronecker product, since B_i = e_i ⊗ I_m:

vec(X) = Σ_{i=1}^n (e_i ⊗ I_m) X e_i
For a symmetric matrix A, the vector vec(A) contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the n(n + 1)/2 entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, vech(A), of a symmetric n × n matrix A is the n(n + 1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of A:

vech(A) = [a_{1,1}, …, a_{n,1}, a_{2,2}, …, a_{n,2}, …, a_{n−1,n−1}, a_{n,n−1}, a_{n,n}]^T
For example, for the 2×2 matrix A = [a b; b d], the half-vectorization is vech(A) = [a, b, d]^T.
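A minimal half-vectorization sketch in NumPy (the helper name vech and the entries a = 1, b = 2, d = 4 are illustrative):

```python
import numpy as np

def vech(A):
    # Half-vectorization: stack, column by column, the entries on and
    # below the main diagonal (n(n+1)/2 of them in total).
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

# Symmetric 2x2 example [[a, b], [b, d]] with a=1, b=2, d=4.
A = np.array([[1, 2],
              [2, 4]])
print(vech(A))  # [1 2 4], i.e. [a, b, d]^T
```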
There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa, called, respectively, the duplication matrix and the elimination matrix.
Programming languages that implement matrices may have easy means for vectorization. In Matlab/GNU Octave a matrix A can be vectorized by A(:). GNU Octave also allows vectorization and half-vectorization with vec(A) and vech(A) respectively. Julia has the vec(A) function as well. In Python, NumPy arrays implement the flatten method,[note 1] while in R the desired effect can be achieved via the c() or as.vector() functions or, more efficiently, by removing the dimensions attribute of a matrix A with dim(A) <- NULL. In R, the function vec() of package 'ks' allows vectorization, and the function vech(), implemented in both packages 'ks' and 'sn', allows half-vectorization.[2][3][4]
Vectorization is used in matrix calculus and its applications, e.g., in establishing moments of random vectors and matrices, asymptotics, as well as Jacobian and Hessian matrices.[5] It is also used in local sensitivity and statistical diagnostics.[6]