Partition of sums of squares

This article is about the partition of sums of squares in statistics. For other uses, see Sum of squares.
For broader coverage of this topic, see Analysis of variance.
"Variance partitioning" redirects here; not to be confused with Variance decomposition.

The partition of sums of squares is a concept that permeates much of inferential statistics and descriptive statistics. More properly, it is the partitioning of sums of squared deviations or errors. Mathematically, the sum of squared deviations is an unscaled, or unadjusted, measure of dispersion (also called variability). When scaled for the number of degrees of freedom, it estimates the variance, or spread of the observations about their mean value. Partitioning of the sum of squared deviations into various components allows the overall variability in a dataset to be ascribed to different types or sources of variability, with the relative importance of each being quantified by the size of each component of the overall sum of squares.

Background


The distance from any point in a collection of data to the mean of the data is the deviation. This can be written as \(y_i - \overline{y}\), where \(y_i\) is the ith data point and \(\overline{y}\) is the estimate of the mean. If all such deviations are squared, then summed, as in \(\sum_{i=1}^{n} \left(y_i - \overline{y}\right)^2\), this gives the "sum of squares" for these data.
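For illustration, the following minimal Python/NumPy sketch (the data values are made up) computes the deviations and the resulting sum of squares:

```python
import numpy as np

# Illustrative data; any numeric sample would do.
y = np.array([4.0, 7.0, 6.0, 5.0, 8.0])

y_bar = y.mean()               # estimate of the mean
deviations = y - y_bar         # y_i - y_bar for each observation
sum_of_squares = np.sum(deviations ** 2)

print(sum_of_squares)          # unscaled measure of dispersion
```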

When more data are added to the collection, the sum of squares will increase, except in unlikely cases such as new data points being exactly equal to the mean. So the sum of squares will usually grow with the size of the data collection. That is a manifestation of the fact that it is unscaled.

In many cases, the number of degrees of freedom is simply the number of data points in the collection, minus one. We write this as n − 1, where n is the number of data points.

Scaling (also known as normalizing) means adjusting the sum of squares so that it does not grow as the size of the data collection grows. This is important when we want to compare samples of different sizes, such as a sample of 100 people compared to a sample of 20 people. If the sum of squares were not normalized, its value would always be larger for the sample of 100 people than for the sample of 20 people. To scale the sum of squares, we divide it by the degrees of freedom, i.e., calculate the sum of squares per degree of freedom, or variance. Standard deviation, in turn, is the square root of the variance.
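Continuing the sketch above (still with illustrative data), dividing the sum of squares by the n − 1 degrees of freedom gives the variance, and its square root gives the standard deviation; these match NumPy's built-in estimators when ddof=1 is used:

```python
import numpy as np

y = np.array([4.0, 7.0, 6.0, 5.0, 8.0])
sum_of_squares = np.sum((y - y.mean()) ** 2)

dof = len(y) - 1                     # degrees of freedom: n - 1
variance = sum_of_squares / dof      # sum of squares per degree of freedom
std_dev = np.sqrt(variance)          # square root of the variance

# Same values as NumPy's built-ins with ddof=1.
assert np.isclose(variance, np.var(y, ddof=1))
assert np.isclose(std_dev, np.std(y, ddof=1))
```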

The above describes how the sum of squares is used in descriptive statistics; see the article on total sum of squares for an application of this broad principle to inferential statistics.

Partitioning the sum of squares in linear regression


Theorem. Given a linear regression model \(y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i\) including a constant \(\beta_0\), based on a sample \((y_i, x_{i1}, \ldots, x_{ip}),\ i = 1, \ldots, n\) containing n observations, the total sum of squares \(\mathrm{TSS} = \sum_{i=1}^{n} (y_i - \bar{y})^2\) can be partitioned as follows into the explained sum of squares (ESS) and the residual sum of squares (RSS):

\[
\mathrm{TSS} = \mathrm{ESS} + \mathrm{RSS},
\]

where this equation is equivalent to each of the following forms:

\[
\begin{aligned}
\left\| y - \bar{y}\mathbf{1} \right\|^{2} &= \left\| \hat{y} - \bar{y}\mathbf{1} \right\|^{2} + \left\| \hat{\varepsilon} \right\|^{2}, \quad \mathbf{1} = (1, 1, \ldots, 1)^{T}, \\
\sum_{i=1}^{n} (y_i - \bar{y})^{2} &= \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^{2} + \sum_{i=1}^{n} (y_i - \hat{y}_i)^{2}, \\
\sum_{i=1}^{n} (y_i - \bar{y})^{2} &= \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^{2} + \sum_{i=1}^{n} \hat{\varepsilon}_i^{2},
\end{aligned}
\]
where \(\hat{y}_i\) is the value estimated by the regression line having \(\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p\) as the estimated coefficients.[1]
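The identity can be checked numerically. The sketch below (simulated data, ordinary least squares via numpy.linalg.lstsq) fits a model with a constant and verifies that TSS equals ESS plus RSS up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # constant column + p regressors
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficient estimates
y_hat = X @ beta_hat                              # fitted values
residuals = y - y_hat                             # estimated errors

tss = np.sum((y - y.mean()) ** 2)
ess = np.sum((y_hat - y.mean()) ** 2)
rss = np.sum(residuals ** 2)

assert np.isclose(tss, ess + rss)  # the partition holds because the model includes a constant
```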

Proof

\[
\begin{aligned}
\sum_{i=1}^{n} (y_i - \overline{y})^{2}
&= \sum_{i=1}^{n} (y_i - \overline{y} + \hat{y}_i - \hat{y}_i)^{2}
 = \sum_{i=1}^{n} \bigl( (\hat{y}_i - \bar{y}) + \underbrace{(y_i - \hat{y}_i)}_{\hat{\varepsilon}_i} \bigr)^{2} \\
&= \sum_{i=1}^{n} \bigl( (\hat{y}_i - \bar{y})^{2} + 2\hat{\varepsilon}_i (\hat{y}_i - \bar{y}) + \hat{\varepsilon}_i^{2} \bigr) \\
&= \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^{2} + \sum_{i=1}^{n} \hat{\varepsilon}_i^{2} + 2 \sum_{i=1}^{n} \hat{\varepsilon}_i (\hat{y}_i - \bar{y}) \\
&= \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^{2} + \sum_{i=1}^{n} \hat{\varepsilon}_i^{2} + 2 \sum_{i=1}^{n} \hat{\varepsilon}_i (\hat{\beta}_0 + \hat{\beta}_1 x_{i1} + \cdots + \hat{\beta}_p x_{ip} - \overline{y}) \\
&= \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^{2} + \sum_{i=1}^{n} \hat{\varepsilon}_i^{2} + 2 (\hat{\beta}_0 - \overline{y}) \underbrace{\sum_{i=1}^{n} \hat{\varepsilon}_i}_{0} + 2 \hat{\beta}_1 \underbrace{\sum_{i=1}^{n} \hat{\varepsilon}_i x_{i1}}_{0} + \cdots + 2 \hat{\beta}_p \underbrace{\sum_{i=1}^{n} \hat{\varepsilon}_i x_{ip}}_{0} \\
&= \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^{2} + \sum_{i=1}^{n} \hat{\varepsilon}_i^{2} = \mathrm{ESS} + \mathrm{RSS}
\end{aligned}
\]

The requirement that the model include a constant, or equivalently that the design matrix contain a column of ones, ensures that \(\sum_{i=1}^{n} \hat{\varepsilon}_i = 0\), i.e. \(\hat{\varepsilon}^{T}\mathbf{1} = 0\).
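A small numerical comparison (again with simulated data) illustrates why the constant matters: with a column of ones the residuals sum to zero and the partition holds, whereas a fit through the origin offers neither guarantee:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.normal(loc=3.0, size=n)
y = 5.0 + 2.0 * x + rng.normal(size=n)

def check(X, y):
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    tss = np.sum((y - y.mean()) ** 2)
    ess = np.sum((X @ beta_hat - y.mean()) ** 2)
    rss = np.sum(resid ** 2)
    return resid.sum(), tss - (ess + rss)   # both are ~0 only when an intercept is included

print(check(np.column_stack([np.ones(n), x]), y))  # intercept included: (~0, ~0)
print(check(x.reshape(-1, 1), y))                  # no intercept: generally nonzero
```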

The proof can also be expressed in vector form, as follows:

\[
\begin{aligned}
SS_{\text{total}} = \Vert \mathbf{y} - \bar{y}\mathbf{1} \Vert^{2}
&= \Vert \mathbf{y} - \bar{y}\mathbf{1} + \hat{\mathbf{y}} - \hat{\mathbf{y}} \Vert^{2} \\
&= \Vert (\hat{\mathbf{y}} - \bar{y}\mathbf{1}) + (\mathbf{y} - \hat{\mathbf{y}}) \Vert^{2} \\
&= \Vert \hat{\mathbf{y}} - \bar{y}\mathbf{1} \Vert^{2} + \Vert \hat{\varepsilon} \Vert^{2} + 2 \hat{\varepsilon}^{T} (\hat{\mathbf{y}} - \bar{y}\mathbf{1}) \\
&= SS_{\text{regression}} + SS_{\text{error}} + 2 \hat{\varepsilon}^{T} (X\hat{\beta} - \bar{y}\mathbf{1}) \\
&= SS_{\text{regression}} + SS_{\text{error}} + 2 (\hat{\varepsilon}^{T} X) \hat{\beta} - 2 \bar{y} \underbrace{\hat{\varepsilon}^{T} \mathbf{1}}_{0} \\
&= SS_{\text{regression}} + SS_{\text{error}}.
\end{aligned}
\]

The elimination of terms in the last line used the fact that

\[
\hat{\varepsilon}^{T} X = (\mathbf{y} - \hat{\mathbf{y}})^{T} X = \mathbf{y}^{T} (I - X(X^{T}X)^{-1}X^{T})^{T} X = \mathbf{y}^{T} (X^{T} - X^{T})^{T} = \mathbf{0}.
\]
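This orthogonality is easy to confirm numerically with the projection (hat) matrix; the sketch below uses arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix: projection onto the column space of X
residuals = (np.eye(n) - H) @ y        # y - y_hat

# The residual vector is orthogonal to every column of X,
# so eps_hat^T X = 0 up to floating-point error.
assert np.allclose(residuals @ X, 0.0)
```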

Further partitioning


Note that the residual sum of squares can be further partitioned as the lack-of-fit sum of squares plus the sum of squares due to pure error.
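As a rough sketch of this further partition (with made-up replicated data and a simple straight-line fit), the residual sum of squares splits exactly into a pure-error part, from replicates scattered about their level means, and a lack-of-fit part, from those level means deviating from the fitted line:

```python
import numpy as np

# Replicated observations at a few x levels (illustrative data).
x = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
y = np.array([2.1, 1.9, 4.2, 3.8, 5.5, 6.1, 9.0, 8.6])

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat
rss = np.sum((y - y_hat) ** 2)

levels = np.unique(x)
# Pure error: squared deviations of replicates about their level means.
ss_pure_error = sum(np.sum((y[x == lv] - y[x == lv].mean()) ** 2) for lv in levels)
# Lack of fit: level means versus the fitted line, weighted by replicate counts.
ss_lack_of_fit = sum(np.sum(x == lv) * (y[x == lv].mean() - y_hat[x == lv][0]) ** 2
                     for lv in levels)

assert np.isclose(rss, ss_lack_of_fit + ss_pure_error)
```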


References

1. "Sum of Squares - Definition, Formulas, Regression Analysis". Corporate Finance Institute. Retrieved 2020-10-16.