The Hadamard product operates on identically shaped matrices and produces a third matrix of the same dimensions.
In mathematics, the Hadamard product (also known as the element-wise product, entrywise product[1]: ch. 5 or Schur product)[2] is a binary operation that takes in two matrices of the same dimensions and returns a matrix of the multiplied corresponding elements. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French mathematician Jacques Hadamard or Russian mathematician Issai Schur.
For two matrices A and B of the same dimension m × n, the Hadamard product A ∘ B (sometimes written A ⊙ B)[4][5][6] is a matrix of the same dimension as the operands, with elements given by[3]

$(A \circ B)_{ij} = (A)_{ij} (B)_{ij}.$
For matrices of different dimensions (m × n and p × q, where m ≠ p or n ≠ q), the Hadamard product is undefined.
An example of the Hadamard product for two arbitrary 2 × 3 matrices:

$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \circ \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{pmatrix} = \begin{pmatrix} a_{11}\,b_{11} & a_{12}\,b_{12} & a_{13}\,b_{13} \\ a_{21}\,b_{21} & a_{22}\,b_{22} & a_{23}\,b_{23} \end{pmatrix}$
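As a concrete numerical illustration, a minimal sketch using NumPy (discussed further below) computes the entrywise product of two 2 × 3 arrays and contrasts it with the matrix product:

    import numpy as np

    # Two matrices of the same shape (2 x 3)
    A = np.array([[1, 2, 3],
                  [4, 5, 6]])
    B = np.array([[7, 8, 9],
                  [10, 11, 12]])

    # Hadamard (elementwise) product: each entry is A[i, j] * B[i, j]
    H = A * B
    print(H)        # [[ 7 16 27]
                    #  [40 55 72]]

    # The matrix product, by contrast, sums over an index (shapes must be compatible)
    print(A @ B.T)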
The Hadamard product is commutative (when working with a commutative ring), associative, and distributive over addition. That is, if A, B, and C are matrices of the same size, and k is a scalar:

$A \circ B = B \circ A,$
$A \circ (B \circ C) = (A \circ B) \circ C,$
$A \circ (B + C) = A \circ B + A \circ C,$
$(kA) \circ B = k(A \circ B) = A \circ (kB).$
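A quick numerical check of these identities (a sketch with NumPy; the comparisons are exact because integer arrays are used):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.integers(-5, 5, size=(3, 3)) for _ in range(3))
    k = 7

    # Commutativity, associativity, distributivity, and scalar compatibility
    assert np.array_equal(A * B, B * A)
    assert np.array_equal(A * (B * C), (A * B) * C)
    assert np.array_equal(A * (B + C), A * B + A * C)
    assert np.array_equal((k * A) * B, k * (A * B))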
The identity matrix under Hadamard multiplication of two m × n matrices is an m × n matrix where all elements are equal to 1. This is different from the identity matrix under regular matrix multiplication, where only the elements of the main diagonal are equal to 1. Furthermore, a matrix has an inverse under Hadamard multiplication if and only if all of the elements are invertible, or equivalently over a field, if and only if none of the elements are equal to zero.[7]
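A small sketch illustrating the all-ones Hadamard identity and the entrywise inverse (assuming, as stated above, that no entry of A is zero):

    import numpy as np

    A = np.array([[2.0, 4.0],
                  [0.5, 8.0]])
    J = np.ones_like(A)          # all-ones matrix: the identity for the Hadamard product

    assert np.array_equal(A * J, A)

    # The Hadamard inverse is the entrywise reciprocal; it exists because no entry is zero
    A_inv = 1.0 / A
    assert np.allclose(A * A_inv, J)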
For vectors x and y and corresponding diagonal matrices D_x and D_y with these vectors as their main diagonals, the following identity holds:[1]: 479

$\mathbf{x}^* (A \circ B) \mathbf{y} = \operatorname{tr}\!\left(D_{\mathbf{x}}^* A D_{\mathbf{y}} B^{\mathsf T}\right),$

where x* denotes the conjugate transpose of x. In particular, using vectors of ones, this shows that the sum of all elements in the Hadamard product is the trace of AB^T, where superscript T denotes the matrix transpose, that is,

$\sum_{i,j} (A \circ B)_{ij} = \operatorname{tr}\!\left(A B^{\mathsf T}\right).$

A related result for square A and B is that the row-sums of their Hadamard product are the diagonal elements of AB^T:[8]

$\sum_{j} (A \circ B)_{ij} = \left(A B^{\mathsf T}\right)_{ii}.$

Similarly,

$\sum_{i} (A \circ B)_{ij} = \left(A^{\mathsf T} B\right)_{jj}.$

Furthermore, a Hadamard matrix–vector product can be expressed as

$(A \circ B) \mathbf{y} = \operatorname{diag}\!\left(A D_{\mathbf{y}} B^{\mathsf T}\right),$

where diag(M) is the vector formed from the main diagonal of matrix M. Taking y to be the vector of ones, this implies that

$(A \circ B) \mathbf{1} = \operatorname{diag}\!\left(A B^{\mathsf T}\right).$
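These identities can be checked numerically; the following is a small sketch using NumPy with real matrices, so the conjugate transpose reduces to the ordinary transpose:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)

    H = A * B                                    # Hadamard product

    # x^* (A o B) y == tr(D_x^* A D_y B^T)
    assert np.isclose(x @ H @ y, np.trace(np.diag(x) @ A @ np.diag(y) @ B.T))

    # Sum of all entries equals tr(A B^T)
    assert np.isclose(H.sum(), np.trace(A @ B.T))

    # Row sums are the diagonal entries of A B^T
    assert np.allclose(H.sum(axis=1), np.diag(A @ B.T))

    # (A o B) y == diag(A D_y B^T)
    assert np.allclose(H @ y, np.diag(A @ np.diag(y) @ B.T))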
The Hadamard product of two vectors x and y is the same as matrix multiplication of the corresponding diagonal matrix of one vector by the other vector:

$\mathbf{x} \circ \mathbf{y} = D_{\mathbf{x}} \mathbf{y} = D_{\mathbf{y}} \mathbf{x}.$
The operator transforming a vector to a diagonal matrix may be expressed using the Hadamard product as

$D_{\mathbf{x}} = (\mathbf{x} \mathbf{1}^{\mathsf T}) \circ I,$

where 1 is a constant vector with elements equal to 1 and I is the identity matrix.
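Both vector identities are easy to verify numerically; a minimal NumPy sketch:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    # x o y equals D_x y (and D_y x)
    assert np.allclose(x * y, np.diag(x) @ y)
    assert np.allclose(x * y, np.diag(y) @ x)

    # D_x recovered via the Hadamard product: (x 1^T) o I
    ones = np.ones_like(x)
    assert np.allclose(np.outer(x, ones) * np.eye(len(x)), np.diag(x))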
The Hadamard product of two positive-semidefinite matrices is positive-semidefinite.[3][8] This is known as the Schur product theorem,[7] after Russian mathematician Issai Schur. For two positive-semidefinite matrices A and B, it is also known that the determinant of their Hadamard product is greater than or equal to the product of their respective determinants:[8]

$\det(A \circ B) \geq \det(A)\det(B).$
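A numerical illustration of the Schur product theorem and the determinant inequality (a sketch: the positive-semidefinite matrices are generated as Gram matrices):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5
    X = rng.standard_normal((n, n))
    Y = rng.standard_normal((n, n))
    A = X @ X.T          # Gram matrices are positive-semidefinite
    B = Y @ Y.T

    H = A * B

    # All eigenvalues of A o B are (numerically) nonnegative
    assert np.all(np.linalg.eigvalsh(H) >= -1e-9)

    # det(A o B) >= det(A) det(B), as stated above
    assert np.linalg.det(H) >= np.linalg.det(A) * np.linalg.det(B)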
Other Hadamard operations are also seen in the mathematical literature,[15] namely the Hadamard root and Hadamard power (which are in effect the same thing because of fractional indices), defined for a matrix A such that:

$B = A^{\circ 2}, \qquad B_{ij} = A_{ij}^{2},$

and, for the Hadamard root,

$B = A^{\circ 1/2}, \qquad B_{ij} = A_{ij}^{1/2}.$
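In NumPy these entrywise powers and roots are obtained directly (a brief sketch):

    import numpy as np

    A = np.array([[1.0, 4.0],
                  [9.0, 16.0]])

    # Hadamard (entrywise) power and root
    print(A ** 2)            # [[  1.  16.] [ 81. 256.]]
    print(np.sqrt(A))        # [[1. 2.] [3. 4.]]
    print(np.power(A, 0.5))  # same as np.sqrt(A)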
Most scientific or numerical programming languages include the Hadamard product, under various names.
In MATLAB, the Hadamard product is expressed as "dot multiply": a .* b, or the function call times(a, b).[18] It also has analogous dot operators, for example a .^ b and a ./ b.[19] Because of this mechanism, it is possible to reserve * and ^ for matrix multiplication and matrix powers, respectively.
The programming language Julia has syntax similar to MATLAB's, where Hadamard multiplication is called broadcast multiplication and is also denoted a .* b, and other operators are analogously defined element-wise, for example Hadamard powers use a .^ b.[20] But unlike MATLAB, in Julia this "dot" syntax is generalized with a generic broadcasting operator ., which can apply any function element-wise. This includes both binary operators (such as the aforementioned multiplication and exponentiation, as well as any other binary operator such as the Kronecker product), and also unary operators such as ! and √. Thus, any function in prefix notation f can be applied as f.(x).[21]
Python does not have built-in array support, leading to inconsistent and conflicting notations among libraries. The NumPy numerical library interprets a * b or np.multiply(a, b) as the Hadamard product, and uses a @ b or np.matmul(a, b) for the matrix product. With the SymPy symbolic library, multiplication of matrix objects as either a * b or a @ b produces the matrix product; the Hadamard product can be obtained with the method call a.multiply_elementwise(b).[22] Some Python packages include support for Hadamard powers using methods like np.power(a, b), or the Pandas method a.pow(b).
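The following sketch illustrates these conventions, assuming NumPy and SymPy are installed:

    import numpy as np
    import sympy as sp

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6], [7, 8]])
    print(a * b)                # Hadamard product
    print(np.multiply(a, b))    # same result
    print(a @ b)                # matrix product

    A = sp.Matrix([[1, 2], [3, 4]])
    B = sp.Matrix([[5, 6], [7, 8]])
    print(A * B)                       # matrix product
    print(A.multiply_elementwise(B))   # Hadamard product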
In C++, the Eigen library provides a cwiseProduct member function for the Matrix class (a.cwiseProduct(b)), while the Armadillo library uses the operator % to make compact expressions (a % b; a * b is a matrix product).
In GAUSS and HP Prime, the operation is known as array multiplication.
In Fortran, R, APL, J, and Wolfram Language (Mathematica), the multiplication operator * or × applies the Hadamard product, whereas the matrix product is written using matmul, %*%, +.×, +/ .* and ., respectively.
In the Maple programming language, the multiplication operator * is not defined for Matrices and Vectors. The matrix product is written using ., while the Hadamard product may be obtained using *~. (The ~ is a 'broadcast operator' similar to . in Julia.)
The R package matrixcalc introduces the function hadamard.prod() for the Hadamard product of numeric matrices or vectors.[23]
The Hadamard product appears in lossy compression algorithms such as JPEG. The decoding step involves an entry-for-entry product, in other words the Hadamard product.[citation needed]
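As an illustration of the kind of entry-for-entry product involved (a sketch only, not the full JPEG pipeline: dequantizing an 8 × 8 block multiplies the quantized coefficients entrywise by a quantization table; the values below are hypothetical):

    import numpy as np

    # Hypothetical 8 x 8 quantized transform coefficients and quantization table
    quantized = np.random.default_rng(3).integers(-10, 10, size=(8, 8))
    quant_table = np.full((8, 8), 16)

    # Dequantization is an entrywise (Hadamard) product
    coefficients = quantized * quant_table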
In image processing, the Hadamard operator can be used for enhancing, suppressing or masking image regions. One matrix represents the original image, the other acts as a weight or masking matrix.
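A minimal masking sketch (the image values are hypothetical): a binary mask multiplied entrywise against an image keeps the selected region and zeroes out the rest.

    import numpy as np

    # Hypothetical 4 x 4 grayscale image and a binary mask selecting a region
    image = np.arange(16, dtype=float).reshape(4, 4)
    mask = np.zeros((4, 4))
    mask[1:3, 1:3] = 1.0           # keep only the central 2 x 2 region

    masked = image * mask          # Hadamard product suppresses everything else
    print(masked)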
It is used in the machine learning literature, for example, to describe the architecture of recurrent neural networks such as GRUs or LSTMs.[24]
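For example, the state update of an LSTM cell combines gate activations with cell states through elementwise products; a minimal sketch in which the gate pre-activations and states are hypothetical placeholders:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(4)
    h_dim = 3
    # Hypothetical gate pre-activations (forget, input, output, candidate) and previous cell state
    f, i, o, g = (rng.standard_normal(h_dim) for _ in range(4))
    c_prev = rng.standard_normal(h_dim)

    # LSTM update: every "*" below is a Hadamard (elementwise) product
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)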
It is also used to study the statistical properties of random vectors and matrices.[25][26]
According to the definition of V. Slyusar, the penetrating face product of the p × g matrix A and the n-dimensional matrix B (n > 1) with p × g blocks B_i is a matrix of size p × g × n whose blocks are the Hadamard products of A with the corresponding blocks of B:[27]

$A [\circ] B = \left[ A \circ B_1,\; A \circ B_2,\; \ldots,\; A \circ B_n \right].$
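A minimal sketch of this block-wise operation under the interpretation above (each p × g slice of the 3-dimensional array is multiplied entrywise by A; the arrays are hypothetical):

    import numpy as np

    p, g, n = 2, 3, 4
    rng = np.random.default_rng(5)
    A = rng.standard_normal((p, g))
    B = rng.standard_normal((n, p, g))    # n blocks, each of size p x g

    # Penetrating face product: Hadamard product of A with every block of B
    result = np.stack([A * B[k] for k in range(n)])
    # Equivalently, via NumPy broadcasting:
    assert np.allclose(result, A * B)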
^ Liu, Shuangzhe; Trenkler, Götz (2008). "Hadamard, Khatri-Rao, Kronecker and other matrix products". International Journal of Information and Systems Sciences. 4 (1): 160–177.
^ "Matrix multiplication". An Introduction to R. The R Project for Statistical Computing. 16 May 2013. Retrieved 24 August 2013.
^ Sak, Haşim; Senior, Andrew; Beaufays, Françoise (2014-02-05). "Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition". arXiv:1402.1128 [cs.NE].
^ Neudecker, Heinz; Liu, Shuangzhe; Polasek, Wolfgang (1995). "The Hadamard product and some of its applications in statistics". Statistics. 26 (4): 365–373. doi:10.1080/02331889508802503.
^ Neudecker, Heinz; Liu, Shuangzhe (2001). "Some statistical properties of Hadamard products of random matrices". Statistical Papers. 42 (4): 475–487. doi:10.1007/s003620100074. S2CID 121385730.
^ Ha, D.; Dai, A. M.; Le, Q. V. (2017). "HyperNetworks". The International Conference on Learning Representations (ICLR) 2017, Toulon, 2017, p. 6. arXiv:1609.09106.