Hadamard product (matrices)

From Wikipedia, the free encyclopedia
Elementwise product of two matrices
The Hadamard product operates on identically shaped matrices and produces a third matrix of the same dimensions.

In mathematics, the Hadamard product (also known as the element-wise product, entrywise product[1]: ch. 5 or Schur product)[2] is a binary operation that takes two matrices of the same dimensions and returns a matrix of the multiplied corresponding elements. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French mathematician Jacques Hadamard or Russian mathematician Issai Schur.

The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative.[3]

Definition


For two matrices A and B of the same dimension m × n, the Hadamard product A ⊙ B (sometimes written A ∘ B)[4][5][6] is a matrix of the same dimension as the operands, with elements given by[3]

$$(A \odot B)_{ij} = (A)_{ij}(B)_{ij}.$$

For matrices of different dimensions (m × n and p × q, where m ≠ p or n ≠ q), the Hadamard product is undefined.

An example of the Hadamard product for two arbitrary 2 × 3 matrices:

$${\begin{bmatrix}2&3&1\\0&8&-2\end{bmatrix}}\odot {\begin{bmatrix}3&1&4\\7&9&5\end{bmatrix}}={\begin{bmatrix}2\times 3&3\times 1&1\times 4\\0\times 7&8\times 9&-2\times 5\end{bmatrix}}={\begin{bmatrix}6&3&4\\0&72&-10\end{bmatrix}}.$$
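The worked example above can be reproduced in a few lines of NumPy, where `*` on arrays is elementwise (a minimal illustrative sketch, not part of the article's sources):

```python
import numpy as np

# The two 2 x 3 matrices from the worked example above.
A = np.array([[2, 3, 1],
              [0, 8, -2]])
B = np.array([[3, 1, 4],
              [7, 9, 5]])

# For NumPy ndarrays, * is elementwise, i.e. the Hadamard product.
H = A * B

print(H)
# [[  6   3   4]
#  [  0  72 -10]]
```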

Properties


The mixed-product property

The Hadamard product obeys certain relationships with other matrix product operators. For example, it satisfies the mixed-product property with the Kronecker product: for matrices A, C of one common dimension and B, D of another,
$$(A \otimes B) \odot (C \otimes D) = (A \odot C) \otimes (B \odot D).$$

Schur product theorem

Main article: Schur product theorem

The Hadamard product of two positive-semidefinite matrices is positive-semidefinite.[3][8] This is known as the Schur product theorem,[7] after Russian mathematician Issai Schur. For two positive-semidefinite matrices A and B, the determinant of their Hadamard product is greater than or equal to the product of their respective determinants:[8]
$$\det(A \odot B) \geq \det(A)\det(B).$$
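Both statements can be illustrated numerically (a sanity check, not a proof; the two positive-semidefinite matrices are arbitrary examples built as G·Gᵀ, which is PSD by construction):

```python
import numpy as np

# Two small positive-semidefinite matrices, each built as G @ G.T
# (PSD by construction; the specific entries are illustrative).
G1 = np.array([[1.0, 2.0], [0.0, 3.0]])
G2 = np.array([[2.0, 0.0], [1.0, 1.0]])
A = G1 @ G1.T   # [[ 5,  6], [ 6,  9]]
B = G2 @ G2.T   # [[ 4,  2], [ 2,  2]]

H = A * B       # Hadamard product

# Schur product theorem: H is again PSD, so its eigenvalues are >= 0
# (up to floating-point rounding).
eigs = np.linalg.eigvalsh(H)
assert eigs.min() >= -1e-12

# Determinant inequality: det(A o B) >= det(A) det(B).
assert np.linalg.det(H) >= np.linalg.det(A) * np.linalg.det(B) - 1e-9
```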

Analogous operations


Other Hadamard operations are also seen in the mathematical literature,[15] namely the Hadamard root and Hadamard power (which are in effect the same thing because of fractional indices), defined for a matrix such that:

For the Hadamard power,
$$B = A^{\circ 2}: \quad B_{ij} = A_{ij}^{2},$$

and for the Hadamard root,
$$B = A^{\circ {\frac {1}{2}}}: \quad B_{ij} = A_{ij}^{\frac {1}{2}}.$$

The Hadamard inverse reads:[15]
$$B = A^{\circ -1}: \quad B_{ij} = A_{ij}^{-1}.$$

A Hadamard division is defined as:[16][17]
$$C = A \oslash B: \quad C_{ij} = {\frac {A_{ij}}{B_{ij}}}.$$
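In array-oriented languages these four operations are simply the ordinary elementwise power, root, reciprocal, and division; a minimal NumPy sketch with illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 4.0], [9.0, 16.0]])
B = np.array([[2.0, 2.0], [3.0, 4.0]])

hadamard_power = A ** 2      # (A^{o2})_ij    = A_ij ** 2
hadamard_root  = A ** 0.5    # (A^{o1/2})_ij  = A_ij ** 0.5
hadamard_inv   = A ** -1.0   # (A^{o-1})_ij   = 1 / A_ij (entries must be nonzero)
hadamard_div   = A / B       # (A oslash B)_ij = A_ij / B_ij

print(hadamard_root)
# [[1. 2.]
#  [3. 4.]]
```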

In programming languages


Most scientific or numerical programming languages include the Hadamard product, under various names.

In MATLAB, the Hadamard product is expressed as "dot multiply": a .* b, or the function call times(a, b).[18] It also has analogous dot operators, for example a .^ b and a ./ b.[19] Because of this mechanism, * and ^ can be reserved for matrix multiplication and matrix powers, respectively.

The programming language Julia has similar syntax to MATLAB, where Hadamard multiplication is called broadcast multiplication and is also denoted a .* b, and other operators are analogously defined element-wise; for example, Hadamard powers use a .^ b.[20] But unlike MATLAB, in Julia this "dot" syntax is generalized with a generic broadcasting operator, which can apply any function element-wise. This includes both binary operators (such as the aforementioned multiplication and exponentiation, as well as any other binary operator such as the Kronecker product) and unary operators such as !. Thus, any function f in prefix notation can be applied as f.(x).[21]

Python does not have built-in array support, leading to inconsistent and conflicting notations among libraries. The NumPy numerical library interprets a * b or np.multiply(a, b) as the Hadamard product, and uses a @ b or np.matmul(a, b) for the matrix product. With the SymPy symbolic library, multiplication of Matrix objects as either a*b or a@b produces the matrix product; the Hadamard product can be obtained with the method call a.multiply_elementwise(b).[22] Some Python packages include support for Hadamard powers using methods like np.power(a, b) or the Pandas method a.pow(b).
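The NumPy distinction between the two products can be seen directly (illustrative snippet with arbitrary 2 × 2 matrices):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

elementwise = a * b              # Hadamard product
also_elem   = np.multiply(a, b)  # same result
matrix_prod = a @ b              # matrix product (also np.matmul(a, b))

print(elementwise)
# [[ 5 12]
#  [21 32]]
print(matrix_prod)
# [[19 22]
#  [43 50]]
```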

In C++, the Eigen library provides a cwiseProduct member function for the Matrix class (a.cwiseProduct(b)), while the Armadillo library uses the operator % to make compact expressions (a % b; a * b is a matrix product).

In GAUSS and HP Prime, the operation is known as array multiplication.

In Fortran, R, APL, J, and Wolfram Language (Mathematica), the multiplication operator * or × applies the Hadamard product, whereas the matrix product is written using matmul, %*%, +.×, +/ .* and ., respectively.

In the Maple programming language, the multiplication operator * is not defined for Matrices and Vectors. The matrix product is written using ., while the Hadamard product may be obtained using *~. (The ~ is a 'broadcast operator' similar to the dot in Julia.)

The R package matrixcalc provides the function hadamard.prod() for the Hadamard product of numeric matrices or vectors.[23]

Applications


The Hadamard product appears in lossy compression algorithms such as JPEG. The decoding step involves an entry-for-entry product, in other words the Hadamard product.[citation needed]

In image processing, the Hadamard operator can be used for enhancing, suppressing or masking image regions: one matrix represents the original image, while the other acts as a weight or masking matrix.
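A minimal masking sketch in NumPy (the tiny "image" and mask values are invented for illustration):

```python
import numpy as np

# A tiny 4 x 4 "image" of grayscale intensities (values are illustrative).
image = np.arange(16, dtype=float).reshape(4, 4)

# A binary mask that keeps the top-left 2 x 2 region and suppresses the rest.
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0

# The Hadamard product zeroes every pixel outside the kept region.
masked = image * mask
```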

It is used in the machine learning literature, for example to describe the architecture of recurrent neural networks such as GRUs or LSTMs.[24]

It is also used to study the statistical properties of random vectors and matrices.[25][26]

The penetrating face product


According to the definition of V. Slyusar, the penetrating face product of the p × g matrix A and the n-dimensional matrix B (n > 1) with p × g blocks (B = [B_n]) is a matrix of the same size as B, of the form:[27]
$$A [\circ] B = \left[{\begin{array}{c|c|c|c} A \circ B_{1} & A \circ B_{2} & \cdots & A \circ B_{n}\end{array}}\right].$$

Example


If
$$A = {\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}}, \quad B = \left[{\begin{array}{c|c|c} B_{1} & B_{2} & B_{3}\end{array}}\right] = \left[{\begin{array}{ccc|ccc|ccc}1&4&7&2&8&14&3&12&21\\8&20&5&10&25&40&12&30&6\\2&8&3&2&4&2&7&3&9\end{array}}\right]$$

then

$$A [\circ] B = \left[{\begin{array}{ccc|ccc|ccc}1&8&21&2&16&42&3&24&63\\32&100&30&40&125&240&48&150&36\\14&64&27&14&32&18&49&24&81\end{array}}\right].$$
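The definition and the worked example above can be checked with a short NumPy sketch (the helper name `penetrating_face_product` is hypothetical, not a library function):

```python
import numpy as np

def penetrating_face_product(A, blocks):
    """Hypothetical helper: computes A [o] B for B given as a list of
    blocks B_1..B_n, each the same shape as A, by stacking the
    Hadamard products [A o B_1 | A o B_2 | ... | A o B_n]."""
    return np.hstack([A * Bk for Bk in blocks])

# Matrices from the worked example.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
B1 = np.array([[1, 4, 7], [8, 20, 5], [2, 8, 3]])
B2 = np.array([[2, 8, 14], [10, 25, 40], [2, 4, 2]])
B3 = np.array([[3, 12, 21], [12, 30, 6], [7, 3, 9]])

result = penetrating_face_product(A, [B1, B2, B3])  # shape (3, 9)
```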

Main properties

$$A [\circ] B = B [\circ] A;$$[27]
$$M \bullet M = M [\circ] \left(M \otimes \mathbf{1}^{\mathsf{T}}\right),$$

where $\bullet$ denotes the face-splitting product of matrices and $\mathbf{1}^{\mathsf{T}}$ is a row vector of ones;

$$\mathbf{c} \bullet M = \mathbf{c} [\circ] M,$$ where $\mathbf{c}$ is a vector.

Applications


The penetrating face product is used in the tensor-matrix theory of digital antenna arrays.[27] This operation can also be used in artificial neural network models, specifically convolutional layers.[28]


References

  1. Horn, Roger A.; Johnson, Charles R. (2012). Matrix Analysis. Cambridge University Press.
  2. Davis, Chandler (1962). "The norm of the Schur product operation". Numerische Mathematik. 4 (1): 343–44. doi:10.1007/bf01386329.
  3. Million, Elizabeth (April 12, 2007). "The Hadamard Product" (PDF). buzzard.ups.edu. Retrieved September 6, 2020.
  4. "Hadamard product - Machine Learning Glossary". machinelearning.wtf.
  5. "linear algebra - What does a dot in a circle mean?". Mathematics Stack Exchange.
  6. "Element-wise (or pointwise) operations notation?". Mathematics Stack Exchange.
  7. Million, Elizabeth. "The Hadamard Product" (PDF). Retrieved 2 January 2012.
  8. Styan, George P. H. (1973). "Hadamard Products and Multivariate Statistical Analysis". Linear Algebra and Its Applications. 6: 217–240. doi:10.1016/0024-3795(73)90023-2.
  9. Liu, Shuangzhe; Trenkler, Götz (2008). "Hadamard, Khatri-Rao, Kronecker and other matrix products". International Journal of Information and Systems Sciences. 4 (1): 160–177.
  10. Liu, Shuangzhe; Leiva, Víctor; Zhuang, Dan; Ma, Tiefeng; Figueroa-Zúñiga, Jorge I. (2022). "Matrix differential calculus with applications in the multivariate linear model and its diagnostics". Journal of Multivariate Analysis. 188: 104849. doi:10.1016/j.jmva.2021.104849.
  11. Liu, Shuangzhe; Trenkler, Götz; Kollo, Tõnu; von Rosen, Dietrich; Baksalary, Oskar Maria (2023). "Professor Heinz Neudecker and matrix differential calculus". Statistical Papers. 65 (4): 2605–2639. doi:10.1007/s00362-023-01499-w.
  12. Hiai, Fumio; Lin, Minghua (February 2017). "On an eigenvalue inequality involving the Hadamard product". Linear Algebra and Its Applications. 515: 313–320. doi:10.1016/j.laa.2016.11.017.
  13. "Project" (PDF). buzzard.ups.edu. 2007. Retrieved 2019-12-18.
  14. Slyusar, V. I. (1998). "End products in matrices in radar applications" (PDF). Radioelectronics and Communications Systems. 41 (3): 50–53.
  15. Reams, Robert (1999). "Hadamard inverses, square roots and products of almost semidefinite matrices". Linear Algebra and Its Applications. 288: 35–43. doi:10.1016/S0024-3795(98)10162-3.
  16. Wetzstein, Gordon; Lanman, Douglas; Hirsch, Matthew; Raskar, Ramesh. "Supplementary Material: Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting" (PDF). MIT Media Lab. Retrieved 2016-10-18.
  17. Cyganek, Boguslaw (2013). Object Detection and Recognition in Digital Images: Theory and Practice. John Wiley & Sons. p. 109. ISBN 9781118618363.
  18. "MATLAB times function".
  19. "Array vs. Matrix Operations".
  20. "Vectorized "dot" operators". Retrieved 31 January 2024.
  21. "Dot Syntax for Vectorizing Functions". Retrieved 31 January 2024.
  22. "Common Matrices — SymPy 1.9 documentation". Retrieved 2021-05-04.
  23. "Matrix multiplication". An Introduction to R. The R Project for Statistical Computing. 16 May 2013. Retrieved 24 August 2013.
  24. Sak, Haşim; Senior, Andrew; Beaufays, Françoise (2014). "Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition". arXiv:1402.1128 [cs.NE].
  25. Neudecker, Heinz; Liu, Shuangzhe; Polasek, Wolfgang (1995). "The Hadamard product and some of its applications in statistics". Statistics. 26 (4): 365–373. doi:10.1080/02331889508802503.
  26. Neudecker, Heinz; Liu, Shuangzhe (2001). "Some statistical properties of Hadamard products of random matrices". Statistical Papers. 42 (4): 475–487. doi:10.1007/s003620100074.
  27. Slyusar, V. I. (1999). "A Family of Face Products of Matrices and its Properties" (PDF). Cybernetics and Systems Analysis. 35 (3): 379–384. doi:10.1007/BF02733426.
  28. Ha, D.; Dai, A. M.; Le, Q. V. (2017). "HyperNetworks". The International Conference on Learning Representations (ICLR) 2017. p. 6. arXiv:1609.09106.