Kernel regression

From Wikipedia, the free encyclopedia
Not to be confused with Kernel principal component analysis or Kernel ridge regression.
Technique in statistics

In statistics, kernel regression is a non-parametric technique to estimate the conditional expectation of a random variable. The objective is to find a non-linear relation between a pair of random variables $X$ and $Y$.

In any nonparametric regression, the conditional expectation of a variable $Y$ relative to a variable $X$ may be written:

$$\operatorname{E}(Y\mid X)=m(X)$$

where $m$ is an unknown function.

Nadaraya–Watson kernel regression


Nadaraya and Watson, both in 1964, proposed to estimate $m$ as a locally weighted average, using a kernel as a weighting function.[1][2][3] The Nadaraya–Watson estimator is:

$$\widehat{m}_{h}(x)=\frac{\sum_{i=1}^{n}K_{h}(x-x_{i})\,y_{i}}{\sum_{i=1}^{n}K_{h}(x-x_{i})}$$

where $K_{h}(t)=\frac{1}{h}K\left(\frac{t}{h}\right)$ is a kernel with a bandwidth $h$ such that $K(\cdot)$ is of order at least 1, that is $\int_{-\infty}^{\infty}uK(u)\,du=0$.
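
To make the formula concrete, the following is a minimal R sketch of the Nadaraya–Watson estimator with a Gaussian kernel; the function name nw, the simulated data, and the bandwidth value are illustrative assumptions, not part of the article.

nw <- function(x, xi, yi, h) {
  w <- dnorm((x - xi) / h) / h   # K_h(x - x_i) with a Gaussian kernel K
  sum(w * yi) / sum(w)           # locally weighted average of the y_i
}

set.seed(42)                     # illustrative simulated data
xi <- runif(100)
yi <- sin(2 * pi * xi) + rnorm(100, sd = 0.2)
nw(0.5, xi, yi, h = 0.05)        # estimate of m(0.5); the true value sin(pi) is 0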

Derivation


Starting with the definition of conditional expectation,

$$\operatorname{E}(Y\mid X=x)=\int y\,f(y\mid x)\,dy=\int y\,\frac{f(x,y)}{f(x)}\,dy$$

we estimate the joint density $f(x,y)$ and the marginal density $f(x)$ using kernel density estimation with a kernel $K$:

$$\hat{f}(x,y)=\frac{1}{n}\sum_{i=1}^{n}K_{h}(x-x_{i})\,K_{h}(y-y_{i}),$$
$$\hat{f}(x)=\frac{1}{n}\sum_{i=1}^{n}K_{h}(x-x_{i}).$$

Substituting these estimates into the conditional expectation, we get:

$$\begin{aligned}\operatorname{\hat{E}}(Y\mid X=x)&=\int y\,\frac{\hat{f}(x,y)}{\hat{f}(x)}\,dy\\&=\int y\,\frac{\sum_{i=1}^{n}K_{h}(x-x_{i})\,K_{h}(y-y_{i})}{\sum_{j=1}^{n}K_{h}(x-x_{j})}\,dy\\&=\frac{\sum_{i=1}^{n}K_{h}(x-x_{i})\int y\,K_{h}(y-y_{i})\,dy}{\sum_{j=1}^{n}K_{h}(x-x_{j})}\\&=\frac{\sum_{i=1}^{n}K_{h}(x-x_{i})\,y_{i}}{\sum_{j=1}^{n}K_{h}(x-x_{j})},\end{aligned}$$

which is the Nadaraya–Watson estimator. The inner integral evaluates to $y_i$ because, with the substitution $u=(y-y_i)/h$, $\int y\,K_{h}(y-y_{i})\,dy=\int (y_{i}+hu)\,K(u)\,du=y_{i}$, using $\int K(u)\,du=1$ and the zero-mean condition on $K$ stated above.

Priestley–Chao kernel estimator

$$\widehat{m}_{PC}(x)=h^{-1}\sum_{i=2}^{n}(x_{i}-x_{i-1})\,K\left(\frac{x-x_{i}}{h}\right)y_{i}$$

where $h$ is the bandwidth (or smoothing parameter).
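
As a hedged illustration (not from the article), the Priestley–Chao estimator can be written in R as follows, assuming ordered design points and a Gaussian kernel; the function name pc and the simulated data are assumptions.

pc <- function(x, xi, yi, h) {
  o <- order(xi); xi <- xi[o]; yi <- yi[o]      # the estimator assumes x_1 < ... < x_n
  gaps <- diff(xi)                              # spacings x_i - x_{i-1}, i = 2..n
  sum(gaps * dnorm((x - xi[-1]) / h) * yi[-1]) / h
}

set.seed(42)                                    # illustrative simulated data
xi <- sort(runif(100))
yi <- sin(2 * pi * xi) + rnorm(100, sd = 0.2)
pc(0.5, xi, yi, h = 0.05)                       # estimate of m(0.5); the true value is 0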

Gasser–Müller kernel estimator

$$\widehat{m}_{GM}(x)=h^{-1}\sum_{i=1}^{n}\left[\int_{s_{i-1}}^{s_{i}}K\left(\frac{x-u}{h}\right)du\right]y_{i}$$

where $s_{i}=\frac{x_{i-1}+x_{i}}{2}$.[4]
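
For a Gaussian kernel the inner integral can be evaluated in closed form with the normal distribution function, as the following illustrative R sketch does; the boundary convention $s_{0}=-\infty$, $s_{n}=+\infty$ and the function name gm are assumptions.

gm <- function(x, xi, yi, h) {
  o <- order(xi); xi <- xi[o]; yi <- yi[o]              # sort the design points
  s <- c(-Inf, (head(xi, -1) + tail(xi, -1)) / 2, Inf)  # midpoints s_0, s_1, ..., s_n
  # h^{-1} * integral of K((x - u)/h) over (s_{i-1}, s_i), via the normal CDF
  w <- pnorm((x - head(s, -1)) / h) - pnorm((x - tail(s, -1)) / h)
  sum(w * yi)                                           # the weights sum to 1
}

set.seed(42)                                            # illustrative simulated data
xi <- sort(runif(100))
yi <- sin(2 * pi * xi) + rnorm(100, sd = 0.2)
gm(0.5, xi, yi, h = 0.05)                               # estimate of m(0.5); true value 0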

Example

Figure: Estimated regression function.

This example is based upon Canadian cross-section wage data consisting of a random sample taken from the 1971 Canadian Census Public Use Tapes for male individuals having common education (grade 13). There are 205 observations in total.[citation needed]

The figure shows the estimated regression function using a second-order Gaussian kernel along with asymptotic variability bounds.

Script for example


The following commands of the R programming language use the npreg() function to deliver optimal smoothing and to create the figure given above. These commands can be entered at the command prompt via cut and paste.

install.packages("np")
library(np)                          # nonparametric regression library
data(cps71)                          # 1971 Canadian census wage data
attach(cps71)
m <- npreg(logwage ~ age)            # kernel regression of log wage on age
plot(m, plot.errors.method = "asymptotic",
     plot.errors.style = "band",
     ylim = c(11, 15.2))             # estimated function with variability bounds
points(age, logwage, cex = 0.25)     # overlay the raw observations
detach(cps71)
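
Here "optimal smoothing" refers to the bandwidth selected by npreg()'s data-driven method (least-squares cross-validation by default in the np package) rather than a user-supplied value.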

Related


According to David Salsburg, the algorithms used in kernel regression were independently developed and used in fuzzy systems: "Coming up with almost exactly the same computer algorithm, fuzzy systems and kernel density-based regressions appear to have been developed completely independently of one another."[5]

Statistical implementation

Kernel regression is implemented in a range of statistical software, for example in MATLAB[6] and in R.[7][8]


References

  1. ^ Nadaraya, E. A. (1964). "On Estimating Regression". Theory of Probability and Its Applications. 9 (1): 141–142. doi:10.1137/1109020.
  2. ^ Watson, G. S. (1964). "Smooth regression analysis". Sankhyā: The Indian Journal of Statistics, Series A. 26 (4): 359–372. JSTOR 25049340.
  3. ^ Bierens, Herman J. (1994). "The Nadaraya–Watson kernel regression function estimator". Topics in Advanced Econometrics. New York: Cambridge University Press. pp. 212–247. ISBN 0-521-41900-X.
  4. ^ Gasser, Theo; Müller, Hans-Georg (1979). "Kernel estimation of regression functions". Smoothing Techniques for Curve Estimation (Proc. Workshop, Heidelberg, 1979). Lecture Notes in Mathematics. Vol. 757. Berlin: Springer. pp. 23–68. ISBN 3-540-09706-6. MR 0564251.
  5. ^ Salsburg, D. (2002). The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. W.H. Freeman. pp. 290–291. ISBN 0-8050-7134-2.
  6. ^ Horová, I.; Koláček, J.; Zelinka, J. (2012). Kernel Smoothing in MATLAB: Theory and Practice of Kernel Smoothing. Singapore: World Scientific Publishing. ISBN 978-981-4405-48-5.
  7. ^ np: Nonparametric kernel smoothing methods for mixed data types
  8. ^ Kloke, John; McKean, Joseph W. (2014). Nonparametric Statistical Methods Using R. CRC Press. pp. 98–106. ISBN 978-1-4398-7343-4.
