Moment matrix

From Wikipedia, the free encyclopedia

In mathematics, a moment matrix is a special symmetric square matrix whose rows and columns are indexed by monomials. The entries of the matrix depend only on the product of the indexing monomials (cf. Hankel matrix).

Moment matrices play an important role in polynomial fitting, polynomial optimization (since positive semidefinite moment matrices correspond to polynomials that are sums of squares)[1] and econometrics.[2]
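For a univariate measure, the monomials are $1, x, x^2, \dots$ and the product of the indexing monomials is $x^a \cdot x^b = x^{a+b}$, so the $(a,b)$ entry of the moment matrix is the single moment $m_{a+b}$; this is why such matrices have Hankel structure. A minimal sketch in NumPy, using the moments of the standard normal distribution as an illustrative example:

```python
import numpy as np

# Moments m_j = E[X^j] of the standard normal distribution, j = 0..4.
m = [1, 0, 1, 0, 3]

# Moment matrix indexed by the monomials 1, x, x^2: entry (a, b)
# depends only on the product x^a * x^b = x^{a+b}, so M[a, b] = m[a+b].
M = np.array([[m[a + b] for b in range(3)] for a in range(3)], dtype=float)
print(M)

# Anti-diagonals are constant, i.e. M is a Hankel matrix, and because
# the moments come from a genuine probability measure, M is positive
# semidefinite (consistent with the sums-of-squares connection above).
print(np.linalg.eigvalsh(M).min() > 0)  # True
```

Conversely, checking whether a candidate moment sequence comes from some measure reduces to checking positive semidefiniteness of matrices like this one.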

Application in regression


A multiple linear regression model can be written as

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u$$

where $y$ is the dependent variable, $x_1, x_2, \dots, x_k$ are the independent variables, $u$ is the error, and $\beta_0, \beta_1, \dots, \beta_k$ are unknown coefficients to be estimated. Given observations $\left\{y_i, x_{i1}, x_{i2}, \dots, x_{ik}\right\}_{i=1}^{n}$, we have a system of $n$ linear equations that can be expressed in matrix notation.[3]

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} 1 & x_{11} & x_{12} & \dots & x_{1k} \\ 1 & x_{21} & x_{22} & \dots & x_{2k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{nk} \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{bmatrix} + \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$$

or

$$\mathbf{y} = \mathbf{X} {\boldsymbol{\beta}} + \mathbf{u}$$

where $\mathbf{y}$ and $\mathbf{u}$ are each a vector of dimension $n \times 1$, $\mathbf{X}$ is the design matrix of order $n \times (k+1)$, and $\boldsymbol{\beta}$ is a vector of dimension $(k+1) \times 1$. Under the Gauss–Markov assumptions, the best linear unbiased estimator of $\boldsymbol{\beta}$ is the linear least squares estimator $\mathbf{b} = \left(\mathbf{X}^{\mathsf{T}}\mathbf{X}\right)^{-1} \mathbf{X}^{\mathsf{T}}\mathbf{y}$, involving the two moment matrices $\mathbf{X}^{\mathsf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathsf{T}}\mathbf{y}$ defined as

$$\mathbf{X}^{\mathsf{T}}\mathbf{X} = \begin{bmatrix} n & \sum x_{i1} & \sum x_{i2} & \dots & \sum x_{ik} \\ \sum x_{i1} & \sum x_{i1}^2 & \sum x_{i1} x_{i2} & \dots & \sum x_{i1} x_{ik} \\ \sum x_{i2} & \sum x_{i1} x_{i2} & \sum x_{i2}^2 & \dots & \sum x_{i2} x_{ik} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sum x_{ik} & \sum x_{i1} x_{ik} & \sum x_{i2} x_{ik} & \dots & \sum x_{ik}^2 \end{bmatrix}$$

and

$$\mathbf{X}^{\mathsf{T}}\mathbf{y} = \begin{bmatrix} \sum y_i \\ \sum x_{i1} y_i \\ \vdots \\ \sum x_{ik} y_i \end{bmatrix}$$

where $\mathbf{X}^{\mathsf{T}}\mathbf{X}$ is a square normal matrix of dimension $(k+1) \times (k+1)$, and $\mathbf{X}^{\mathsf{T}}\mathbf{y}$ is a vector of dimension $(k+1) \times 1$.
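The derivation above can be sketched in NumPy. The data below are synthetic and the coefficient values are invented purely for illustration; note that the entries of $\mathbf{X}^{\mathsf{T}}\mathbf{X}$ are exactly the sums of products shown:

```python
import numpy as np

# A minimal sketch of the estimator b = (X^T X)^{-1} X^T y on synthetic data.
rng = np.random.default_rng(0)
n, k = 200, 2
x = rng.normal(size=(n, k))               # independent variables
X = np.column_stack([np.ones(n), x])      # design matrix, order n x (k+1)
beta = np.array([1.0, 2.0, -3.0])         # coefficients used to generate y
y = X @ beta + 0.1 * rng.normal(size=n)   # add a small error term u

XtX = X.T @ X   # (k+1) x (k+1) moment matrix
Xty = X.T @ y   # (k+1) x 1 moment vector

# The entries of X^T X are the sums displayed above:
assert np.isclose(XtX[0, 0], n)                          # top-left entry is n
assert np.isclose(XtX[0, 1], X[:, 1].sum())              # first-order sums
assert np.isclose(XtX[1, 2], (X[:, 1] * X[:, 2]).sum())  # cross products

# Solving the normal equations directly is numerically preferable to
# forming the explicit inverse (X^T X)^{-1}:
b = np.linalg.solve(XtX, Xty)
print(b)  # approximately [1, 2, -3]
```

With a small error term and moderate $n$, the estimate $\mathbf{b}$ recovers the generating coefficients closely, as the Gauss–Markov result suggests.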


References

  1. ^ Lasserre, Jean-Bernard (2010). Moments, Positive Polynomials and Their Applications. London: Imperial College Press. ISBN 978-1-84816-446-8. OCLC 624365972.
  2. ^ Goldberger, Arthur S. (1964). "Classical Linear Regression". Econometric Theory. New York: John Wiley & Sons. pp. 156–212. ISBN 0-471-31101-4.
  3. ^ Huang, David S. (1970). Regression and Econometric Methods. New York: John Wiley & Sons. pp. 52–65. ISBN 0-471-41754-8.
