Simple linear regression

From Wikipedia, the free encyclopedia
Linear regression model with a single explanatory variable
Okun's law in macroeconomics is an example of simple linear regression. Here the dependent variable (GDP growth) is presumed to be in a linear relationship with the changes in the unemployment rate.

In statistics, simple linear regression (SLR) is a linear regression model with a single explanatory variable.[1][2][3][4][5] That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor.

It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. In this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass (x̄, ȳ) of the data points.

Formulation and computation


Consider the model function

$$ y = \alpha + \beta x, $$

which describes a line with slope β and y-intercept α. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors. Suppose we observe n data pairs and call them {(x_i, y_i), i = 1, ..., n}. We can describe the underlying relationship between y_i and x_i involving this error term ε_i by

$$ y_i = \alpha + \beta x_i + \varepsilon_i . $$

This relationship between the true (but unobserved) underlying parameters α and β and the data points is called a linear regression model.

The goal is to find estimated values $\widehat{\alpha}$ and $\widehat{\beta}$ for the parameters α and β which would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit is understood as in the least-squares approach: a line that minimizes the sum of squared residuals $\widehat{\varepsilon}_i$ (see also Errors and residuals), the differences between actual and predicted values of the dependent variable y. For any candidate parameter values α and β, each residual is given by

$$ \widehat{\varepsilon}_i = y_i - \alpha - \beta x_i . $$

In other words, $\widehat{\alpha}$ and $\widehat{\beta}$ solve the following minimization problem:

$$ (\widehat{\alpha},\,\widehat{\beta}) = \operatorname{argmin}\left(Q(\alpha,\beta)\right), $$

where the objective function Q is:

$$ Q(\alpha,\beta) = \sum_{i=1}^{n} \widehat{\varepsilon}_i^{\,2} = \sum_{i=1}^{n} (y_i - \alpha - \beta x_i)^2 . $$

By expanding to get a quadratic expression in α and β, we can derive the minimizing values of the function arguments, denoted $\widehat{\alpha}$ and $\widehat{\beta}$:[6]

$$
\begin{aligned}
\widehat{\alpha} &= \bar{y} - \widehat{\beta}\,\bar{x}, \\
\widehat{\beta} &= \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} = \frac{\sum_{i=1}^{n}\Delta x_i\,\Delta y_i}{\sum_{i=1}^{n}\Delta x_i^2}
\end{aligned}
$$

Here we have introduced $\bar{x}$ and $\bar{y}$ as the averages of the x_i and y_i, and $\Delta x_i = x_i - \bar{x}$ and $\Delta y_i = y_i - \bar{y}$ as the deviations of x_i and y_i from those averages.
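As a concrete illustration of the closed-form estimates above, here is a minimal Python sketch; the function name, the use of NumPy, and the example data are illustrative and not part of the original presentation.

```python
import numpy as np

def ols_fit(x, y):
    """Closed-form simple-linear-regression estimates (least squares).

    Returns (alpha_hat, beta_hat) for the model y = alpha + beta * x + error.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x - x.mean()          # deviations Δx_i
    dy = y - y.mean()          # deviations Δy_i
    beta_hat = np.sum(dx * dy) / np.sum(dx ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()
    return alpha_hat, beta_hat

# Illustrative data: y ≈ 2 + 3x plus noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 3.0 * x + rng.normal(scale=1.0, size=x.size)
print(ols_fit(x, y))
```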

Expanded formulas


The above equations are efficient to use if the means of the x and y variables ($\bar{x}$ and $\bar{y}$) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the $\widehat{\alpha}$ and $\widehat{\beta}$ equations. These expanded equations may be derived from the more general polynomial regression equations[7][8] by defining the regression polynomial to be of order 1, as follows.

$$
\begin{bmatrix} n & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \end{bmatrix}
\begin{bmatrix} \widehat{\alpha} \\ \widehat{\beta} \end{bmatrix}
=
\begin{bmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \end{bmatrix}
$$

The above system of linear equations may be solved directly, or stand-alone equations for $\widehat{\alpha}$ and $\widehat{\beta}$ may be derived by expanding the matrix equation above. The resulting equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof.[9][7]

$$
\begin{aligned}
\widehat{\alpha} &= \frac{\sum_{i=1}^{n} y_i \sum_{i=1}^{n} x_i^2 - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i y_i}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2} \\[1ex]
\widehat{\beta} &= \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}
\end{aligned}
$$
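Because the expanded formulas need only the running sums n, Σx_i, Σy_i, Σx_i², and Σx_iy_i, they can be evaluated in a single pass over the data without first computing the means. A minimal sketch, with illustrative names and data:

```python
def ols_fit_sums(pairs):
    """Estimate (alpha_hat, beta_hat) from an iterable of (x, y) pairs
    using the expanded sum formulas, without precomputing the means."""
    n = 0
    sx = sy = sxx = sxy = 0.0
    for x, y in pairs:
        n += 1
        sx += x
        sy += y
        sxx += x * x
        sxy += x * y
    denom = n * sxx - sx ** 2
    beta_hat = (n * sxy - sx * sy) / denom
    alpha_hat = (sy * sxx - sx * sxy) / denom
    return alpha_hat, beta_hat

print(ols_fit_sums([(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]))
```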

Interpretation


Relationship with the sample covariance matrix


The solution can be reformulated using elements of the covariance matrix:

$$ \widehat{\beta} = \frac{s_{x,y}}{s_x^2} = r_{xy}\frac{s_y}{s_x} $$

where $s_{x,y}$ is the sample covariance of x and y, $s_x$ and $s_y$ are the sample standard deviations of x and y, and $r_{xy}$ is the sample correlation coefficient between x and y.

Substituting the above expressions for $\widehat{\alpha}$ and $\widehat{\beta}$ into the original solution yields

$$ \frac{y - \bar{y}}{s_y} = r_{xy}\,\frac{x - \bar{x}}{s_x} . $$

This shows that r_xy is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since $-1 \leq r_{xy} \leq 1$, it follows that if x is some measurement and y is a follow-up measurement from the same item, then we expect y (on average) to be closer to the mean measurement than x was. This phenomenon is known as regression toward the mean.

Generalizing the $\bar{x}$ notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:

$$ \overline{xy} = \frac{1}{n}\sum_{i=1}^{n} x_i y_i . $$

This notation allows a concise formula for r_xy:

$$ r_{xy} = \frac{\overline{xy} - \bar{x}\bar{y}}{\sqrt{\left(\overline{x^2} - \bar{x}^2\right)\left(\overline{y^2} - \bar{y}^2\right)}} . $$

The coefficient of determination ("R squared") is equal to $r_{xy}^2$ when the model is linear with a single independent variable. See sample correlation coefficient for additional details.
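The bar-notation formula for r_xy translates directly into code. A minimal sketch (the function name and data are illustrative), which also returns R² = r_xy² for the single-predictor case:

```python
import numpy as np

def correlation_and_r2(x, y):
    """Sample correlation r_xy via the bar-notation formula, and R^2 = r_xy^2."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xy_bar = np.mean(x * y)
    x_bar, y_bar = x.mean(), y.mean()
    x2_bar, y2_bar = np.mean(x ** 2), np.mean(y ** 2)
    r = (xy_bar - x_bar * y_bar) / np.sqrt((x2_bar - x_bar ** 2) * (y2_bar - y_bar ** 2))
    return r, r ** 2

print(correlation_and_r2([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0]))
```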

Interpretation about the slope


By multiplying each term of the summation in the numerator by $\frac{x_i - \bar{x}}{x_i - \bar{x}} = 1$ (thereby not changing it), we obtain:

$$
\begin{aligned}
\widehat{\beta} &= \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \\[1ex]
&= \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2\,\dfrac{y_i - \bar{y}}{x_i - \bar{x}}}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \\[1ex]
&= \sum_{i=1}^{n}\frac{(x_i - \bar{x})^2}{\sum_{j=1}^{n}(x_j - \bar{x})^2}\,\frac{y_i - \bar{y}}{x_i - \bar{x}}
\end{aligned}
$$

We can see that the slope (tangent of angle) of the regression line is the weighted average of $\frac{y_i - \bar{y}}{x_i - \bar{x}}$, which is the slope (tangent of angle) of the line connecting the i-th point to the average of all points, weighted by $(x_i - \bar{x})^2$. The further a point lies from the center, the more "important" it is, since small errors in its position affect the slope of the line connecting it to the center point less.

Interpretation about the intercept


The parameter $\widehat{\alpha}$ is the intercept of the fitted linear function $y = \widehat{\alpha} + \widehat{\beta}\,x$. Therefore, the y-intercept of the function found with simple linear regression is

$$ y_{\text{intercept}} = \widehat{\alpha} = \bar{y} - \widehat{\beta}\,\bar{x} . $$


Because $\widehat{\beta}$ is the slope of the linear function, $\widehat{\beta} = \tan(\theta)$. Therefore, the angle θ that the graph of the function makes with the x axis is equal to

$$ \theta = \arctan(\widehat{\beta}) . $$

Interpretation about the correlation


In the above formulation, notice that each x_i is a constant ("known upfront") value, while the y_i are random variables that depend on the linear function of x_i and the random term ε_i. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.

In this framing, when x_i is not actually a random variable, what type of parameter does the empirical correlation r_xy estimate? The issue is that for each value i we have E(x_i) = x_i and Var(x_i) = 0. A possible interpretation of r_xy is to imagine that x_i defines a random variable drawn from the empirical distribution of the x values in our sample. For example, if x had 10 values from the natural numbers [1, 2, 3, ..., 10], then we can imagine x to follow a discrete uniform distribution. Under this interpretation all x_i have the same expectation and some positive variance. With this interpretation we can think of r_xy as the estimator of the Pearson correlation between the random variable y and the random variable x (as we just defined it).

Numerical properties

  1. The regression line goes through the center of mass point, $(\bar{x},\,\bar{y})$, if the model includes an intercept term (i.e., is not forced through the origin).
  2. The sum of the residuals is zero if the model includes an intercept term: $$\sum_{i=1}^{n}\widehat{\varepsilon}_i = 0.$$
  3. The residuals and x values are uncorrelated (whether or not there is an intercept term in the model), meaning $$\sum_{i=1}^{n} x_i\,\widehat{\varepsilon}_i = 0$$ (see the sketch after this list).
  4. The relationship between $\rho_{xy}$ (the population correlation coefficient) and the population variances of y ($\sigma_y^2$) and of the error term ε ($\sigma_\varepsilon^2$) is:[10]: 401 $$\sigma_\varepsilon^2 = (1 - \rho_{xy}^2)\,\sigma_y^2.$$ For extreme values of $\rho_{xy}$ this is self-evident: when $\rho_{xy} = 0$ then $\sigma_\varepsilon^2 = \sigma_y^2$, and when $\rho_{xy} = 1$ then $\sigma_\varepsilon^2 = 0$.
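These numerical properties can be checked directly on any fitted model. A minimal sketch with simulated data (the data and names are illustrative); properties 1 through 3 hold up to floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=100)
y = 1.5 + 0.8 * x + rng.normal(scale=0.5, size=x.size)

# OLS fit with an intercept term
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
residuals = y - (alpha_hat + beta_hat * x)

# Property 1: the fitted line passes through (x̄, ȳ)
print(np.isclose(alpha_hat + beta_hat * x.mean(), y.mean()))
# Property 2: the residuals sum to zero
print(np.isclose(residuals.sum(), 0.0))
# Property 3: the residuals are orthogonal to x
print(np.isclose(np.sum(x * residuals), 0.0))
```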

Statistical properties


Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.

Unbiasedness


The estimators $\widehat{\alpha}$ and $\widehat{\beta}$ are unbiased.

To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals ε_i as random variables drawn independently from some distribution with mean zero. In other words, for each value of x, the corresponding value of y is generated as a mean response α + βx plus an additional random variable ε, called the error term, equal to zero on average. Under such an interpretation, the least-squares estimators $\widehat{\alpha}$ and $\widehat{\beta}$ will themselves be random variables whose means equal the "true values" α and β. This is the definition of an unbiased estimator.

Variance of the mean response


Since the data in this context is defined to be (x, y) pairs for every observation, the mean response at a given value of x, say x_d, is an estimate of the mean of the y values in the population at the x value of x_d, that is $\hat{E}(y \mid x_d) \equiv \hat{y}_d$. The variance of the mean response is given by:[11]

$$ \operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right) = \operatorname{Var}\left(\hat{\alpha}\right) + \left(\operatorname{Var}\hat{\beta}\right)x_d^2 + 2x_d\operatorname{Cov}\left(\hat{\alpha},\hat{\beta}\right). $$

This expression can be simplified to

$$ \operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right) = \sigma^2\left(\frac{1}{m} + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right), $$

where m is the number of data points.

To demonstrate this simplification, one can make use of the identity

$$ \sum_i (x_i - \bar{x})^2 = \sum_i x_i^2 - \frac{1}{m}\left(\sum_i x_i\right)^2 . $$

Variance of the predicted response

Further information: Prediction interval

The predicted response distribution is the predicted distribution of the residuals at the given point x_d. So the variance is given by

$$
\begin{aligned}
\operatorname{Var}\left(y_d - \left[\hat{\alpha} + \hat{\beta}x_d\right]\right)
&= \operatorname{Var}(y_d) + \operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right) - 2\operatorname{Cov}\left(y_d,\left[\hat{\alpha} + \hat{\beta}x_d\right]\right) \\
&= \operatorname{Var}(y_d) + \operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right).
\end{aligned}
$$

The second line follows from the fact that $\operatorname{Cov}\left(y_d,\left[\hat{\alpha} + \hat{\beta}x_d\right]\right)$ is zero because the new prediction point is independent of the data used to fit the model. Additionally, the term $\operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right)$ was calculated earlier for the mean response.

Since $\operatorname{Var}(y_d) = \sigma^2$ (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by

$$
\begin{aligned}
\operatorname{Var}\left(y_d - \left[\hat{\alpha} + \hat{\beta}x_d\right]\right)
&= \sigma^2 + \sigma^2\left(\frac{1}{m} + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right) \\[4pt]
&= \sigma^2\left(1 + \frac{1}{m} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right).
\end{aligned}
$$
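The two variance formulas differ only by the additional σ² term for a new observation. A minimal sketch of both, assuming σ² is replaced by its usual estimate Σε̂_i²/(n − 2) (that estimate is introduced in the next section; the function name, the example data, and the use of n in place of m are illustrative):

```python
import numpy as np

def response_variances(x, y, x_d):
    """Estimated Var(mean response) and Var(predicted response) at x_d,
    plugging s^2 = sum(resid^2) / (n - 2) in place of the unknown sigma^2."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()
    resid = y - (alpha_hat + beta_hat * x)
    s2 = np.sum(resid ** 2) / (n - 2)            # estimate of sigma^2
    leverage = 1.0 / n + (x_d - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
    var_mean = s2 * leverage                     # variance of the mean response
    var_pred = s2 * (1.0 + leverage)             # variance of the predicted response
    return var_mean, var_pred

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.9]
print(response_variances(x, y, x_d=3.5))
```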

Confidence intervals


The formulas given in the previous section allow one to calculate the point estimates of α and β, that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators $\widehat{\alpha}$ and $\widehat{\beta}$ vary from sample to sample for the specified sample size. Confidence intervals were devised to give a plausible set of values for the estimates one might have if one repeated the experiment a very large number of times.

The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:

  1. the errors in the regression are normally distributed (the so-called classic regression assumption), or
  2. the number of observations n is sufficiently large, in which case the estimator is approximately normally distributed.

The latter case is justified by the central limit theorem.

Normality assumption


Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean β and variance $\sigma^2 / \sum_i (x_i - \bar{x})^2$, where σ² is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals Q is distributed proportionally to χ² with n − 2 degrees of freedom, and independently from $\widehat{\beta}$. This allows us to construct a t-value

$$ t = \frac{\widehat{\beta} - \beta}{s_{\widehat{\beta}}} \ \sim\ t_{n-2}, $$

where

$$ s_{\widehat{\beta}} = \sqrt{\frac{\frac{1}{n-2}\sum_{i=1}^{n}\widehat{\varepsilon}_i^{\,2}}{\sum_{i=1}^{n}(x_i - \bar{x})^2}} $$

is the unbiased standard error estimator of the estimator $\widehat{\beta}$.

This t-value has a Student's t-distribution with n − 2 degrees of freedom. Using it we can construct a confidence interval for β:

$$ \beta \in \left[\,\widehat{\beta} - s_{\widehat{\beta}}\,t_{n-2}^*,\ \widehat{\beta} + s_{\widehat{\beta}}\,t_{n-2}^*\,\right], $$

at confidence level (1 − γ), where $t_{n-2}^*$ is the $\left(1 - \frac{\gamma}{2}\right)$-th quantile of the t_{n−2} distribution. For example, if γ = 0.05 then the confidence level is 95%.
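A minimal sketch of this interval, assuming the availability of SciPy for the t quantile (the function name and variable names are illustrative):

```python
import numpy as np
from scipy import stats

def slope_confidence_interval(x, y, gamma=0.05):
    """Two-sided (1 - gamma) confidence interval for the slope beta."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    sxx = np.sum((x - x.mean()) ** 2)
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    alpha_hat = y.mean() - beta_hat * x.mean()
    resid = y - (alpha_hat + beta_hat * x)
    s_beta = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)   # standard error of beta_hat
    t_star = stats.t.ppf(1.0 - gamma / 2.0, df=n - 2)      # (1 - gamma/2) quantile
    return beta_hat - s_beta * t_star, beta_hat + s_beta * t_star

print(slope_confidence_interval([1, 2, 3, 4, 5], [2.0, 4.1, 5.9, 8.2, 9.9]))
```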

Similarly, the confidence interval for the intercept coefficient α is given by

$$ \alpha \in \left[\,\widehat{\alpha} - s_{\widehat{\alpha}}\,t_{n-2}^*,\ \widehat{\alpha} + s_{\widehat{\alpha}}\,t_{n-2}^*\,\right], $$

at confidence level (1 − γ), where

$$ s_{\widehat{\alpha}} = s_{\widehat{\beta}}\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2} = \sqrt{\frac{1}{n(n-2)}\left(\sum_{i=1}^{n}\widehat{\varepsilon}_i^{\,2}\right)\frac{\sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}} $$

The US "changes in unemployment – GDP growth" regression with the 95% confidence bands.

The confidence intervals for α and β give us the general idea where these regression coefficients are most likely to be. For example, in the Okun's law regression shown here the point estimates are

$$ \widehat{\alpha} = 0.859, \qquad \widehat{\beta} = -1.817. $$

The 95% confidence intervals for these estimates are

$$ \alpha \in \left[\,0.76,\ 0.96\,\right], \qquad \beta \in \left[\,-2.06,\ -1.58\,\right]. $$

In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown[12] that at confidence level (1 − γ) the confidence band has hyperbolic form given by the equation

$$ (\alpha + \beta\xi) \in \left[\,\widehat{\alpha} + \widehat{\beta}\xi \pm t_{n-2}^*\sqrt{\left(\frac{1}{n-2}\sum\widehat{\varepsilon}_i^{\,2}\right)\cdot\left(\frac{1}{n} + \frac{(\xi - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right)}\,\right]. $$

When the model assumes the intercept is fixed and equal to 0 (α = 0), the standard error of the slope becomes:

$$ s_{\widehat{\beta}} = \sqrt{\frac{1}{n-1}\cdot\frac{\sum_{i=1}^{n}\widehat{\varepsilon}_i^{\,2}}{\sum_{i=1}^{n} x_i^2}} $$

with $\hat{\varepsilon}_i = y_i - \hat{y}_i$.

Asymptotic assumption


The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile $t_{n-2}^*$ of Student's t distribution is replaced with the quantile q* of the standard normal distribution. Occasionally the fraction 1/(n − 2) is replaced with 1/n. When n is large such a change does not alter the results appreciably.

Numerical example

See also: Ordinary least squares § Example, and Linear least squares § Example

This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.

 i    x_i (height, m)   y_i (mass, kg)   x_i^2     x_i y_i      y_i^2
 1    1.47              52.21            2.1609    76.7487      2725.8841
 2    1.50              53.12            2.2500    79.6800      2821.7344
 3    1.52              54.48            2.3104    82.8096      2968.0704
 4    1.55              55.84            2.4025    86.5520      3118.1056
 5    1.57              57.20            2.4649    89.8040      3271.8400
 6    1.60              58.57            2.5600    93.7120      3430.4449
 7    1.63              59.93            2.6569    97.6859      3591.6049
 8    1.65              61.29            2.7225    101.1285     3756.4641
 9    1.68              63.11            2.8224    106.0248     3982.8721
 10   1.70              64.47            2.8900    109.5990     4156.3809
 11   1.73              66.28            2.9929    114.6644     4393.0384
 12   1.75              68.10            3.0625    119.1750     4637.6100
 13   1.78              69.92            3.1684    124.4576     4888.8064
 14   1.80              72.19            3.2400    129.9420     5211.3961
 15   1.83              74.46            3.3489    136.2618     5544.2916
 Σ    24.76             931.17           41.0532   1548.2453    58498.5439

There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums:

$$
\begin{aligned}
S_x &= \sum_i x_i = 24.76, & S_y &= \sum_i y_i = 931.17, \\
S_{xx} &= \sum_i x_i^2 = 41.0532, & S_{yy} &= \sum_i y_i^2 = 58498.5439, \\
S_{xy} &= \sum_i x_i y_i = 1548.2453 &&
\end{aligned}
$$

These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.

$$
\begin{aligned}
\widehat{\beta} &= \frac{nS_{xy} - S_x S_y}{nS_{xx} - S_x^2} = 61.272 \\[4pt]
\widehat{\alpha} &= \frac{1}{n}S_y - \widehat{\beta}\,\frac{1}{n}S_x = -39.062 \\[4pt]
s_\varepsilon^2 &= \frac{1}{n(n-2)}\left[nS_{yy} - S_y^2 - \widehat{\beta}^2\,(nS_{xx} - S_x^2)\right] = 0.5762 \\[4pt]
s_{\widehat{\beta}}^2 &= \frac{n\,s_\varepsilon^2}{nS_{xx} - S_x^2} = 3.1539 \\[4pt]
s_{\widehat{\alpha}}^2 &= s_{\widehat{\beta}}^2\,\frac{1}{n}S_{xx} = 8.63185
\end{aligned}
$$

Graph of points and linear least squares lines in the simple linear regression numerical example

The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is $t_{13}^* = 2.1604$, and thus the 95% confidence intervals for α and β are

$$
\begin{aligned}
&\alpha \in [\,\widehat{\alpha} \mp t_{13}^*\,s_{\widehat{\alpha}}\,] = [\,-45.4,\ -32.7\,] \\[4pt]
&\beta \in [\,\widehat{\beta} \mp t_{13}^*\,s_{\widehat{\beta}}\,] = [\,57.4,\ 65.1\,]
\end{aligned}
$$

The product-moment correlation coefficient might also be calculated:

$$ \widehat{r} = \frac{nS_{xy} - S_x S_y}{\sqrt{(nS_{xx} - S_x^2)(nS_{yy} - S_y^2)}} = 0.9946 $$
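The hand calculation above can be checked with a short script. The data are the fifteen (height, mass) pairs from the table; everything else (variable names, use of NumPy) is illustrative:

```python
import numpy as np

height = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
                   1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
mass = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
                 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

n = height.size
Sx, Sy = height.sum(), mass.sum()
Sxx, Syy, Sxy = (height ** 2).sum(), (mass ** 2).sum(), (height * mass).sum()

beta_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
alpha_hat = Sy / n - beta_hat * Sx / n
r_hat = (n * Sxy - Sx * Sy) / np.sqrt((n * Sxx - Sx ** 2) * (n * Syy - Sy ** 2))

print(round(beta_hat, 3), round(alpha_hat, 3), round(r_hat, 4))
# Expected from the worked example: 61.272, -39.062, 0.9946
```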

Alternatives

Calculating the parameters of a linear model by minimizing the squared error.

In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship because it will be biased due to regression dilution.

Other estimation methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points).
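As an illustration of the median-slope idea behind the Theil–Sen estimator, here is a naive O(n²) sketch (the function name and data are illustrative; production implementations handle ties and efficiency more carefully):

```python
import itertools
import statistics

def theil_sen_slope(x, y):
    """Median of the slopes over all pairs of sample points (naive O(n^2) sketch)."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in itertools.combinations(range(len(x)), 2)
              if x[j] != x[i]]
    return statistics.median(slopes)

# The last point is an outlier; the median slope stays near 2.
print(theil_sen_slope([1, 2, 3, 4, 100], [2, 4, 6, 8, 9]))
```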

Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median-slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. Minimizing the squared error can also lead to a model that attempts to fit the outliers more than the rest of the data, which is part of the motivation for the robust alternatives above.

Line fitting


Line fitting is the process of constructing a straight line that has the best fit to a series of data points.

Several methods exist, differing in the criterion used to measure how far the data points deviate from the line.

Simple linear regression without the intercept term (single regressor)


Sometimes it is appropriate to force the regression line to pass through the origin, because x and y are assumed to be proportional. For the model without the intercept term, y = βx, the OLS estimator for β simplifies to

$$ \widehat{\beta} = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2} = \frac{\overline{xy}}{\overline{x^2}} $$

Substituting (x − h, y − k) in place of (x, y) gives the regression through (h, k):

$$
\begin{aligned}
\widehat{\beta} &= \frac{\sum_{i=1}^{n}(x_i - h)(y_i - k)}{\sum_{i=1}^{n}(x_i - h)^2} = \frac{\overline{(x-h)(y-k)}}{\overline{(x-h)^2}} \\[6pt]
&= \frac{\overline{xy} - k\bar{x} - h\bar{y} + hk}{\overline{x^2} - 2h\bar{x} + h^2} \\[6pt]
&= \frac{\overline{xy} - \bar{x}\bar{y} + (\bar{x} - h)(\bar{y} - k)}{\overline{x^2} - \bar{x}^2 + (\bar{x} - h)^2} \\[6pt]
&= \frac{\operatorname{Cov}(x,y) + (\bar{x} - h)(\bar{y} - k)}{\operatorname{Var}(x) + (\bar{x} - h)^2},
\end{aligned}
$$

where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
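A minimal sketch of the slope estimate for a line forced through a chosen point (h, k); setting (h, k) = (0, 0) recovers the no-intercept model y = βx. The function name and data are illustrative:

```python
import numpy as np

def slope_through_point(x, y, h=0.0, k=0.0):
    """Least-squares slope of a line forced through the point (h, k);
    (h, k) = (0, 0) gives the no-intercept model y = beta * x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum((x - h) * (y - k)) / np.sum((x - h) ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.2, 3.9, 6.1, 8.3])
print(slope_through_point(x, y))                 # regression through the origin
print(slope_through_point(x, y, h=2.0, k=4.0))   # regression through (2, 4)
```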


References

  1. Seltman, Howard J. (2008-09-08). Experimental Design and Analysis (PDF). p. 227.
  2. "Statistical Sampling and Regression: Simple Linear Regression". Columbia University. Retrieved 2016-10-17. "When one independent variable is used in a regression, it is called a simple regression; (...)"
  3. Lane, David M. Introduction to Statistics (PDF). p. 462.
  4. Zou, KH; Tuncali, K; Silverman, SG (2003). "Correlation and simple linear regression". Radiology. 227 (3): 617–22. doi:10.1148/radiol.2273011499. ISSN 0033-8419. OCLC 110941167. PMID 12773666.
  5. Altman, Naomi; Krzywinski, Martin (2015). "Simple linear regression". Nature Methods. 12 (11): 999–1000. doi:10.1038/nmeth.3627. ISSN 1548-7091. OCLC 5912005539. PMID 26824102. S2CID 261269711.
  6. Kenney, J. F.; Keeping, E. S. (1962). "Linear Regression and Correlation." Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand. pp. 252–285.
  7. Muthukrishnan, Gowri (17 Jun 2018). "Maths behind Polynomial regression". Retrieved 30 Jan 2024.
  8. "Mathematics of Polynomial Regression". Polynomial Regression, A PHP regression class.
  9. "Simple Linear Regression". Numeracy, Maths and Statistics - Academic Skills Kit, Newcastle University. Retrieved 30 Jan 2024.
  10. Valliant, Richard; Dever, Jill A.; Kreuter, Frauke (2013). Practical Tools for Designing and Weighting Survey Samples. New York: Springer.
  11. Draper, N. R.; Smith, H. (1998). Applied Regression Analysis (3rd ed.). John Wiley. ISBN 0-471-17082-8.
  12. Casella, G.; Berger, R. L. (2002). Statistical Inference (2nd ed.). Cengage. pp. 558–559. ISBN 978-0-534-24312-8.
