Coefficient of determination

From Wikipedia, the free encyclopedia
Indicator for how well data points fit a line or curve
Not to be confused with Coefficient of variation.
Ordinary least squares regression of Okun's law. Since the regression line does not miss any of the points by very much, the R2 of the regression is relatively high.

In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s). It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.[1][2][3]

There are several definitions of R2 that are only sometimes equivalent. In simple linear regression (which includes an intercept), r2 is simply the square of the sample correlation coefficient (r), between the observed outcomes and the observed predictor values.[4] If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.

There are cases where R2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept,[5] or when a non-linear function is used to fit the data.[6] In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.

The coefficient of determination can be more intuitively informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on certain test datasets.[7]

When evaluating the goodness-of-fit of simulated (Ypred) versus measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression (i.e., Yobs = m·Ypred + b).[citation needed] The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1·Ypred + 0 (i.e., the 1:1 line).[8][9]
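
To illustrate the distinction, the following Python sketch (with made-up numbers; the names yobs and ypred are illustrative, not from any cited source) contrasts the squared correlation between Yobs and Ypred with the coefficient of determination computed against the 1:1 line:

```python
import numpy as np

# Hypothetical measured and simulated values: ypred is perfectly correlated
# with yobs, but biased and mis-scaled relative to the 1:1 line.
yobs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ypred = 0.5 * yobs + 2.0

# Squared Pearson correlation between yobs and ypred: the R2 of the
# regression yobs = m*ypred + b, insensitive to bias and scale errors.
r2_corr = np.corrcoef(yobs, ypred)[0, 1] ** 2

# Coefficient of determination evaluated against the 1:1 line yobs = ypred.
ss_res = np.sum((yobs - ypred) ** 2)
ss_tot = np.sum((yobs - yobs.mean()) ** 2)
r2_one_to_one = 1.0 - ss_res / ss_tot

print(r2_corr)        # 1.0: perfect linear correlation
print(r2_one_to_one)  # 0.625: much lower, exposing the bias and scale error
```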

Definitions

R^2 = 1 - \frac{SS_{\text{res}}}{SS_{\text{tot}}}
The better the linear regression (on the right) fits the data in comparison to the simple average (on the left graph), the closer the value of R2 is to 1. The areas of the blue squares represent the squared residuals with respect to the linear regression. The areas of the red squares represent the squared residuals with respect to the average value.

A data set has n values marked y1, ..., yn (collectively known as yi or as a vector y = [y1, ..., yn]^T), each associated with a fitted (or modeled, or predicted) value f1, ..., fn (known as fi, or sometimes ŷi, as a vector f).

Define the residuals as ei = yi − fi (forming a vector e).

If ȳ is the mean of the observed data:

\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i

then the variability of the data set can be measured with two sums of squares formulas:

  • the sum of squares of residuals, also called the residual sum of squares: SS_{\text{res}} = \sum_i (y_i - f_i)^2 = \sum_i e_i^2
  • the total sum of squares (proportional to the variance of the data): SS_{\text{tot}} = \sum_i (y_i - \bar{y})^2

The most general definition of the coefficient of determination is

R^2 = 1 - \frac{SS_{\text{res}}}{SS_{\text{tot}}}

In the best case, the modeled values exactly match the observed values, which results in SSres = 0 and R2 = 1. A baseline model, which always predicts ȳ, will have R2 = 0.
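
As a minimal illustration of this definition (the helper name r_squared and the data are arbitrary), the following Python sketch computes R2 from observed and fitted values:

```python
import numpy as np

def r_squared(y, f):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    ss_res = np.sum((y - f) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y = [3.0, 5.0, 7.0, 10.0]
print(r_squared(y, y))                      # 1.0: modeled values match exactly
print(r_squared(y, [np.mean(y)] * len(y)))  # 0.0: baseline that always predicts the mean
```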

Relation to unexplained variance

Main article: Fraction of variance unexplained

In a general form, R2 can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data): R2 = 1 − FVU.

As explained variance


A larger value of R2 implies a more successful regression model.[4]: 463  Suppose R2 = 0.49. This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for. For regression models, the regression sum of squares, also called the explained sum of squares, is defined as

SS_{\text{reg}} = \sum_i (f_i - \bar{y})^2

In some cases, as in simple linear regression, the total sum of squares equals the sum of the two other sums of squares defined above:

SS_{\text{res}} + SS_{\text{reg}} = SS_{\text{tot}}

See Partitioning in the general OLS model for a derivation of this result for one case where the relation holds. When this relation does hold, the above definition of R2 is equivalent to

R^2 = \frac{SS_{\text{reg}}}{SS_{\text{tot}}} = \frac{SS_{\text{reg}}/n}{SS_{\text{tot}}/n}

where n is the number of observations (cases) on the variables.

In this form R2 is expressed as the ratio of the explained variance (variance of the model's predictions, which is SSreg / n) to the total variance (sample variance of the dependent variable, which is SStot / n).

This partition of the sum of squares holds for instance when the model values ƒi have been obtained by linear regression. A milder sufficient condition reads as follows: The model has the form

f_i = \widehat{\alpha} + \widehat{\beta} q_i

where the qi are arbitrary values that may or may not depend on i or on other free parameters (the common choice qi = xi is just one special case), and the coefficient estimates α̂ and β̂ are obtained by minimizing the residual sum of squares.

This set of conditions is an important one and it has a number of implications for the properties of the fitted residuals and the modelled values. In particular, under these conditions:

\bar{f} = \bar{y}.
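
These consequences can be checked numerically. The following Python sketch (synthetic data, illustrative only) fits an ordinary least squares line with an intercept and verifies the sum-of-squares partition and the equality of the fitted and observed means:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=50)

slope, intercept = np.polyfit(x, y, 1)   # OLS fit with an intercept
f = intercept + slope * x                # fitted values

ss_res = np.sum((y - f) ** 2)
ss_reg = np.sum((f - y.mean()) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)

print(np.isclose(ss_res + ss_reg, ss_tot))   # True: the partition holds
print(np.isclose(f.mean(), y.mean()))        # True: mean of fitted values equals the mean of y
print(ss_reg / ss_tot, 1 - ss_res / ss_tot)  # both expressions give the same R2
```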

As squared correlation coefficient


In linear least squares multiple regression (with fitted intercept and slope), R2 equals ρ2(y, f), the square of the Pearson correlation coefficient between the observed y and modeled (predicted) f data values of the dependent variable.

In a linear least squares regression with a single explanator (with fitted intercept and slope), this is also equal to ρ2(y, x), the squared Pearson correlation coefficient between the dependent variable y and explanatory variable x.

It should not be confused with the correlation coefficient between two coefficient estimates, defined as

\rho_{\widehat{\alpha},\widehat{\beta}} = \frac{\operatorname{cov}(\widehat{\alpha},\widehat{\beta})}{\sigma_{\widehat{\alpha}}\sigma_{\widehat{\beta}}},

where the covariance between two coefficient estimates, as well as their standard deviations, are obtained from the covariance matrix of the coefficient estimates, (X^T X)^{-1}.

Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, an R2 value can be calculated as the square of the correlation coefficient between the original y and modeled f data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values (by creating a revised predictor of the form α + βƒi).[citation needed] According to Everitt,[10] this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two (general) variables.
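
A short Python sketch (synthetic data, illustrative only) verifies both identities for a least squares fit with an intercept:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)

slope, intercept = np.polyfit(x, y, 1)   # least squares fit with intercept
f = intercept + slope * x

r2 = 1 - np.sum((y - f) ** 2) / np.sum((y - y.mean()) ** 2)
print(np.isclose(r2, np.corrcoef(y, f)[0, 1] ** 2))  # True: squared correlation of y and f
print(np.isclose(r2, np.corrcoef(y, x)[0, 1] ** 2))  # True: single-regressor case
```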

Interpretation


R2 is a measure of the goodness of fit of a model.[11] In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data.

Values of R2 outside the range 0 to 1 occur when the model fits the data worse than the worst possible least-squares predictor (equivalent to a horizontal hyperplane at a height equal to the mean of the observed data). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth[12] is used (this is the equation used most often), R2 can be less than zero. If equation 2 of Kvålseth is used, R2 can be greater than one.

In all instances where R2 is used, the predictors are calculated by ordinary least-squares regression: that is, by minimizing SSres. In this case, R2 increases as the number of variables in the model is increased (R2 is monotone increasing with the number of variables included; it will never decrease). This illustrates a drawback to one possible use of R2, where one might keep adding variables (kitchen sink regression) to increase the R2 value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car because the R2 will never decrease as variables are added and will likely experience an increase due to chance alone.

This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as R2 but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the R2 statistic can be calculated as above and may still be a useful measure. If fitting is by weighted least squares or generalized least squares, alternative versions of R2 can be calculated appropriate to those statistical frameworks, while the "raw" R2 may still be useful if it is more easily interpreted. Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis.

In a multiple linear model


Consider a linear model with more than a single explanatory variable, of the form

Y_i = \beta_0 + \sum_{j=1}^{p} \beta_j X_{i,j} + \varepsilon_i,

where, for the ith case, Yi is the response variable, Xi,1, ..., Xi,p are p regressors, and εi is a mean zero error term. The quantities β0, ..., βp are unknown coefficients, whose values are estimated by least squares. The coefficient of determination R2 is a measure of the global fit of the model. Specifically, R2 is an element of [0, 1] and represents the proportion of variability in Yi that may be attributed to some linear combination of the regressors (explanatory variables) in X.[13]

R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, R2 = 1 indicates that the fitted model explains all variability in y, while R2 = 0 indicates no 'linear' relationship (for straight line regression, this means that the straight line model is a constant line (slope = 0, intercept = ȳ) between the response variable and regressors). An interior value such as R2 = 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability."

A caution that applies to R2, as to other statistical descriptions of correlation and association, is that "correlation does not imply causation". In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause").

In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the R2 can be referred to as the coefficient of multiple determination.

Inflation of R2


In least squares regression using typical data, R2 is at least weakly increasing with an increase in the number of regressors in the model. Because increases in the number of regressors increase the value of R2, R2 alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, an F-test can be performed on the residual sum of squares,[citation needed] similar to the F-tests in Granger causality, though this is not always appropriate.[further explanation needed] As a reminder of this, some authors denote R2 by Rq2, where q is the number of columns in X (the number of explanators including the constant).

To demonstrate this property, first recall that the objective of least squares linear regression is

\min_b SS_{\text{res}}(b) \Rightarrow \min_b \sum_i (y_i - X_i b)^2

where Xi is a row vector of values of explanatory variables for case i and b is a column vector of coefficients of the respective elements of Xi.

The optimal value of the objective is weakly smaller as additional columns of X (the explanatory data matrix whose ith row is Xi) are added, because less constrained minimization leads to an optimal cost that is weakly smaller than more constrained minimization does. Given the previous conclusion and noting that SStot depends only on y, the non-decreasing property of R2 follows directly from the definition above.

The intuitive reason that using an additional explanatory variable cannot lower the R2 is this: minimizing SSres is equivalent to maximizing R2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and the R2 unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves the R2.
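
The non-decreasing property can be seen numerically. In the following Python sketch (synthetic data; the helper ols_r2 is illustrative), a regressor that is unrelated to the response is added to the model, and R2 does not decrease:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
noise_regressor = rng.normal(size=n)           # unrelated to y by construction
y = 3.0 + 0.8 * x1 + rng.normal(size=n)

def ols_r2(y, X):
    """R^2 of an OLS fit of y on the columns of X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / np.sum((y - y.mean()) ** 2)

r2_small = ols_r2(y, x1)
r2_large = ols_r2(y, np.column_stack([x1, noise_regressor]))
print(r2_small <= r2_large)   # True: adding a regressor cannot lower R2
print(r2_small, r2_large)     # the small increase is due to chance alone
```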

The above gives an analytical explanation of the inflation of R2. Next, an example based on ordinary least squares regression from a geometric perspective is shown below.[14]

This is an example of residuals of regression models in smaller and larger spaces based on ordinary least squares regression.

A simple case to be considered first:

Y = \beta_0 + \beta_1 X_1 + \varepsilon

This equation describes the ordinary least squares regression model with one regressor. The prediction is shown as the red vector in the figure on the right. Geometrically, it is the projection of the true value onto a model space in ℝ (without intercept). The residual is shown as the red line.

Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon

This equation corresponds to the ordinary least squares regression model with two regressors. The prediction is shown as the blue vector in the figure on the right. Geometrically, it is the projection of the true value onto a larger model space in ℝ^2 (without intercept). Noticeably, the values of β0 and β1 are not the same as in the equation for the smaller model space as long as X1 and X2 are not zero vectors. Therefore, the equations are expected to yield different predictions (i.e., the blue vector is expected to be different from the red vector). The least squares regression criterion ensures that the residual is minimized. In the figure, the blue line representing the residual is orthogonal to the model space in ℝ^2, giving the minimal distance from the space.

The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be larger. Comparing the red and blue lines in the figure, the blue line is orthogonal to the space, and any other line would be longer than the blue one. Considering the calculation for R2, a smaller value of SSres will lead to a larger value of R2, meaning that adding regressors will result in inflation of R2.

Caveats


R2 does not indicate whether:

  • the independent variables are a cause of the changes in the dependent variable;
  • omitted-variable bias exists;
  • the correct regression was used;
  • the most appropriate set of independent variables has been chosen;
  • there is collinearity present in the data on the explanatory variables;
  • the model might be improved by using transformed versions of the existing set of independent variables;
  • there are enough data points to make a solid conclusion;
  • there are a few outliers in an otherwise good sample.
Comparison of the Theil–Sen estimator (black) and simple linear regression (blue) for a set of points with outliers. Because of the many outliers, neither of the regression lines fits the data well, as measured by the fact that neither gives a very high R2.

Extensions


Adjusted R2

See also: Omega-squared (ω2)

The use of an adjusted R2 (one common notation is R̄2, pronounced "R bar squared"; another is Ra2 or Radj2) is an attempt to account for the phenomenon of the R2 automatically increasing when extra explanatory variables are added to the model. There are many different ways of adjusting.[15] By far the most used one, to the point that it is typically just referred to as adjusted R2, is the correction proposed by Mordecai Ezekiel.[15][16][17] The adjusted R2 is defined as

\bar{R}^2 = 1 - \frac{SS_{\text{res}}/{\text{df}}_{\text{res}}}{SS_{\text{tot}}/{\text{df}}_{\text{tot}}}

where dfres is the degrees of freedom of the estimate of the population variance around the model, and dftot is the degrees of freedom of the estimate of the population variance around the mean. dfres is given in terms of the sample size n and the number of variables p in the model: dfres = n − p − 1. dftot is given in the same way, but with p being zero for the mean (i.e., dftot = n − 1).

Inserting the degrees of freedom and using the definition of R2, it can be rewritten as:

\bar{R}^2 = 1 - (1 - R^2)\frac{n - 1}{n - p - 1}

where p is the total number of explanatory variables in the model (excluding the intercept), and n is the sample size.

The adjusted R2 can be negative, and its value will always be less than or equal to that of R2. Unlike R2, the adjusted R2 increases only when the increase in R2 (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance are introduced into a regression one at a time, with the adjusted R2 computed each time, the level at which adjusted R2 reaches a maximum, and decreases afterward, would be the regression with the ideal combination of having the best fit without excess/unnecessary terms.
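
As a small illustration (the helper adjusted_r2 is not a standard library function), the formula above shows how the penalty grows with the number of explanatory variables for a fixed R2:

```python
def adjusted_r2(r2, n, p):
    """Ezekiel's adjustment: 1 - (1 - R^2) * (n - 1) / (n - p - 1).

    n is the sample size; p is the number of explanatory variables
    (excluding the intercept).
    """
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# The same raw R2 is penalized more heavily as variables are added.
print(adjusted_r2(0.70, n=30, p=2))   # ~0.678
print(adjusted_r2(0.70, n=30, p=10))  # ~0.542
```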

Schematic of the bias and variance contributions to the total error

The adjusted R2 can be interpreted as an instance of the bias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of bias will decrease, and these two metrics add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which is shown as a U-shaped curve on the right. For the adjusted R2 specifically, the model complexity (i.e., the number of parameters) affects both R2 and the factor (n − 1)/(n − p − 1), and the adjusted R2 thereby captures their combined effect on the overall performance of the model.

R2 can be interpreted as capturing the variance of the model, which is influenced by the model complexity. A high R2 indicates a lower bias error because the model can better explain the change of Y with predictors. For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal line). In the adjusted R2, the term (1 − R2) will be lower with high complexity, resulting in a higher adjusted R2 and consistently indicating a better performance.

On the other hand, the factor (n − 1)/(n − p − 1) is affected by the model complexity in the opposite direction. This factor will increase when regressors are added (i.e., with increased model complexity) and lead to worse performance. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal line) leads to increasing errors and a worse performance.

Considering the calculation of the adjusted R2, more parameters will increase R2 and lead to an increase in the adjusted R2. Nevertheless, adding more parameters will also increase the factor (n − 1)/(n − p − 1) and thus decrease the adjusted R2. These two trends construct a reverse U-shaped relationship between model complexity and the adjusted R2, which is consistent with the U-shaped trend of model complexity versus overall performance. Unlike R2, which will always increase when model complexity increases, the adjusted R2 will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. Using the adjusted R2 instead of R2 could thereby prevent overfitting.

Following the same logic, adjusted R2 can be interpreted as a less biased estimator of the population R2, whereas the observed sample R2 is a positively biased estimate of the population value.[18] Adjusted R2 is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in the feature selection stage of model building.[18]

The principle behind the adjusted R2 statistic can be seen by rewriting the ordinary R2 as

R^2 = 1 - \frac{\text{VAR}_{\text{res}}}{\text{VAR}_{\text{tot}}}

where VARres = SSres / n and VARtot = SStot / n are the sample variances of the estimated residuals and the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. These estimates are replaced by statistically unbiased versions: VARres = SSres / (n − p) and VARtot = SStot / (n − 1).

Despite using unbiased estimators for the population variances of the error and the dependent variable, adjusted R2 is not an unbiased estimator of the population R2,[18] which would result from using the population variances of the errors and the dependent variable instead of estimating them. Ingram Olkin and John W. Pratt derived the minimum-variance unbiased estimator for the population R2,[19] which is known as the Olkin–Pratt estimator. Comparisons of different approaches for adjusting R2 concluded that in most situations either an approximate version of the Olkin–Pratt estimator[18] or the exact Olkin–Pratt estimator[20] should be preferred over the (Ezekiel) adjusted R2.

Coefficient of partial determination

See also: Partial correlation

The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full model.[21][22][23] This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model.

The calculation for the partial R2 is relatively straightforward after estimating two models and generating the ANOVA tables for them. The calculation for the partial R2 is

\frac{SS_{\text{res,reduced}} - SS_{\text{res,full}}}{SS_{\text{res,reduced}}},

which is analogous to the usual coefficient of determination:

\frac{SS_{\text{tot}} - SS_{\text{res}}}{SS_{\text{tot}}}.
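
The following Python sketch (synthetic data; the helper ss_res is illustrative) computes the partial R2 for one added predictor by comparing the residual sums of squares of a reduced and a full model:

```python
import numpy as np

def ss_res(y, X):
    """Residual sum of squares of an OLS fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(3)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

ss_reduced = ss_res(y, x1)                      # reduced model: x1 only
ss_full = ss_res(y, np.column_stack([x1, x2]))  # full model: x1 and x2

partial_r2 = (ss_reduced - ss_full) / ss_reduced
print(partial_r2)  # share of the reduced model's unexplained variation captured by x2
```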

Generalizing and decomposing R2


As explained above, model selection heuristics such as the adjusted R2 criterion and the F-test examine whether the total R2 sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the total R2 will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high.[24]

Geometric representation of r2.

Alternatively, one can decompose a generalized version of R2 to quantify the relevance of deviating from a hypothesis.[24] As Hoornweg (2018) shows, several shrinkage estimators – such as Bayesian linear regression, ridge regression, and the (adaptive) lasso – make use of this decomposition of R2 when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as

y = X\beta + \varepsilon.

It is assumed that the matrix X is standardized with Z-scores and that the column vector y is centered to have a mean of zero. Let the column vector β0 refer to the hypothesized regression parameters and let the column vector b denote the estimated parameters. We can then define

R^2 = 1 - \frac{(y - Xb)'(y - Xb)}{(y - X\beta_0)'(y - X\beta_0)}.

An R2 of 75% means that the in-sample accuracy improves by 75% if the data-optimized b solutions are used instead of the hypothesized β0 values. In the special case that β0 is a vector of zeros, we obtain the traditional R2 again.

The individual effect on R2 of deviating from a hypothesis can be computed with R⊗ ('R-outer'). This p × p matrix is given by

R^{\otimes} = (X'{\tilde{y}}_0)(X'{\tilde{y}}_0)'(X'X)^{-1}({\tilde{y}}_0'{\tilde{y}}_0)^{-1},

where ỹ0 = y − Xβ0. The diagonal elements of R⊗ exactly add up to R2. If regressors are uncorrelated and β0 is a vector of zeros, then the jth diagonal element of R⊗ simply corresponds to the r2 value between xj and y. When regressors xi and xj are correlated, R⊗ii might increase at the cost of a decrease in R⊗jj. As a result, the diagonal elements of R⊗ may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements of R⊗ to quantify the relevance of deviating from a hypothesized value.[24] See the lasso for an example.
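
A Python sketch of this decomposition (synthetic standardized data, with β0 taken as a vector of zeros purely for illustration) verifies that the diagonal of R⊗ sums to the generalized R2:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.normal(size=(n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize the regressors (Z-scores)
y = X @ np.array([1.0, -0.5, 0.0]) + rng.normal(size=n)
y = y - y.mean()                            # center the dependent variable

beta0 = np.zeros(p)                         # hypothesized parameters
b, *_ = np.linalg.lstsq(X, y, rcond=None)   # unrestricted OLS estimates

y0 = y - X @ beta0                          # deviations from the hypothesis
r2_gen = 1 - (y - X @ b) @ (y - X @ b) / (y0 @ y0)

# R-outer: a p x p matrix whose diagonal decomposes R2 across regressors.
Xy0 = X.T @ y0
R_outer = np.outer(Xy0, Xy0) @ np.linalg.inv(X.T @ X) / (y0 @ y0)

print(np.isclose(np.trace(R_outer), r2_gen))  # True: the diagonal adds up to R2
print(np.diag(R_outer))                       # per-regressor contributions
```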

R2 in logistic regression


In the case of logistic regression, usually fit by maximum likelihood, there are several choices of pseudo-R2.

One is the generalized R2 originally proposed by Cox & Snell,[25] and independently by Magee:[26]

R^2 = 1 - \left(\frac{\mathcal{L}(0)}{\mathcal{L}(\widehat{\theta})}\right)^{2/n}

where L(0) is the likelihood of the model with only the intercept, L(θ̂) is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) and n is the sample size. It is easily rewritten to:

R^2 = 1 - e^{\frac{2}{n}(\ln \mathcal{L}(0) - \ln \mathcal{L}(\widehat{\theta}))} = 1 - e^{-D/n}

where D is the test statistic of the likelihood ratio test.

Nico Nagelkerke noted that it had the following properties:[27][22]

  1. It is consistent with the classical coefficient of determination when both can be computed;
  2. Its value is maximised by the maximum likelihood estimation of a model;
  3. It is asymptotically independent of the sample size;
  4. The interpretation is the proportion of the variation explained by the model;
  5. The values are between 0 and 1, with 0 denoting that the model does not explain any variation and 1 denoting that it perfectly explains the observed variation;
  6. It does not have any unit.

However, in the case of a logistic model, where L(θ̂) cannot be greater than 1, R2 is between 0 and R2max = 1 − (L(0))^{2/n}: thus, Nagelkerke suggested the possibility to define a scaled R2 as R2/R2max.[22]
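
A minimal Python sketch (with hypothetical log-likelihood values) computes the Cox–Snell generalized R2 and Nagelkerke's scaled version from the two log-likelihoods:

```python
import numpy as np

def pseudo_r2(loglik_null, loglik_model, n):
    """Cox-Snell generalized R^2 and Nagelkerke's scaled version.

    loglik_null:  log-likelihood of the intercept-only model, ln L(0)
    loglik_model: log-likelihood of the fitted model, ln L(theta_hat)
    n:            sample size
    """
    cox_snell = 1 - np.exp((2.0 / n) * (loglik_null - loglik_model))
    r2_max = 1 - np.exp((2.0 / n) * loglik_null)   # attained when L(theta_hat) = 1
    return cox_snell, cox_snell / r2_max

# Hypothetical log-likelihoods for a logistic regression on n = 100 observations.
cs, nagelkerke = pseudo_r2(loglik_null=-69.3, loglik_model=-45.2, n=100)
print(cs, nagelkerke)  # Cox-Snell value and its Nagelkerke rescaling
```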

Comparison with residual statistics


Occasionally, residual statistics are used for indicating goodness of fit. The norm of residuals is calculated as the square-root of the sum of squares of residuals (SSR):

\text{norm of residuals} = \sqrt{SS_{\text{res}}} = \|e\|.

Similarly, the reduced chi-square is calculated as the SSR divided by the degrees of freedom.

Both R2 and the norm of residuals have their relative merits. For least squares analysis R2 varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage of R2 is that the SStot term acts to normalize the value. If the yi values are all multiplied by a constant, the norm of residuals will also change by that constant but R2 will stay the same. As a basic example, for the linear least squares fit to the set of data:

  x   1     2     3     4     5
  y   1.9   3.7   5.8   8.0   9.6

R2 = 0.998, and norm of residuals = 0.302. If all values of y are multiplied by 1000 (for example, in an SI prefix change), then R2 remains the same, but norm of residuals = 302.

Another single-parameter indicator of fit is the RMSE of the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example given that the fit was linear with an unforced intercept.[28]
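
The figures quoted above can be reproduced with a short Python sketch (an illustration using numpy; the data are those of the table):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 3.7, 5.8, 8.0, 9.6])

slope, intercept = np.polyfit(x, y, 1)    # linear least squares fit with intercept
f = intercept + slope * x
e = y - f

ss_res = np.sum(e ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)

print(round(1 - ss_res / ss_tot, 3))       # 0.998  (R2)
print(round(np.sqrt(ss_res), 3))           # 0.302  (norm of residuals)
print(round(np.sqrt(ss_res / len(y)), 3))  # 0.135  (RMSE of the residuals)

# Multiplying y by 1000 leaves R2 unchanged but scales the norm of residuals to about 302.
```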

History


The creation of the coefficient of determination has been attributed to the geneticist Sewall Wright and was first published in 1921.[29]


Notes

  1. ^ Steel, R. G. D.; Torrie, J. H. (1960). Principles and Procedures of Statistics with Special Reference to the Biological Sciences. McGraw Hill.
  2. ^ Glantz, Stanton A.; Slinker, B. K. (1990). Primer of Applied Regression and Analysis of Variance. McGraw-Hill. ISBN 978-0-07-023407-9.
  3. ^ Draper, N. R.; Smith, H. (1998). Applied Regression Analysis. Wiley-Interscience. ISBN 978-0-471-17082-2.
  4. ^ a b Devore, Jay L. (2011). Probability and Statistics for Engineering and the Sciences (8th ed.). Boston, MA: Cengage Learning. pp. 508–510. ISBN 978-0-538-73352-6.
  5. ^ Barten, Anton P. (1987). "The Coefficient of Determination for Regression without a Constant Term". In Heijmans, Risto; Neudecker, Heinz (eds.). The Practice of Econometrics. Dordrecht: Kluwer. pp. 181–189. ISBN 90-247-3502-5.
  6. ^ Colin Cameron, A.; Windmeijer, Frank A. G. (1997). "An R-squared measure of goodness of fit for some common nonlinear regression models". Journal of Econometrics. 77 (2): 1790–2. doi:10.1016/S0304-4076(96)01818-0.
  7. ^ Chicco, Davide; Warrens, Matthijs J.; Jurman, Giuseppe (2021). "The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation". PeerJ Computer Science. 7 (e623): e623. doi:10.7717/peerj-cs.623. PMC 8279135. PMID 34307865.
  8. ^ Legates, D. R.; McCabe, G. J. (1999). "Evaluating the use of "goodness-of-fit" measures in hydrologic and hydroclimatic model validation". Water Resour. Res. 35 (1): 233–241. Bibcode:1999WRR....35..233L. doi:10.1029/1998WR900018. S2CID 128417849.
  9. ^ Ritter, A.; Muñoz-Carpena, R. (2013). "Performance evaluation of hydrological models: statistical significance for reducing subjectivity in goodness-of-fit assessments". Journal of Hydrology. 480 (1): 33–45. Bibcode:2013JHyd..480...33R. doi:10.1016/j.jhydrol.2012.12.004.
  10. ^ Everitt, B. S. (2002). Cambridge Dictionary of Statistics (2nd ed.). CUP. p. 78. ISBN 978-0-521-81099-9.
  11. ^ Casella, George (2002). Statistical Inference (2nd ed.). Pacific Grove, Calif.: Duxbury/Thomson Learning. p. 556. ISBN 9788131503942.
  12. ^ Kvalseth, Tarald O. (1985). "Cautionary Note about R2". The American Statistician. 39 (4): 279–285. doi:10.2307/2683704. JSTOR 2683704.
  13. ^ "Linear Regression – MATLAB & Simulink". www.mathworks.com.
  14. ^ Faraway, Julian James (2005). Linear Models with R (PDF). Chapman & Hall/CRC. ISBN 9781584884255.
  15. ^ a b Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F. (1997). "Methodology review: Estimation of population validity and cross-validity, and the use of equal weights in prediction". Applied Psychological Measurement. 21 (4): 291–305. doi:10.1177/01466216970214001. ISSN 0146-6216. S2CID 122308344.
  16. ^ Mordecai Ezekiel (1930). Methods of Correlation Analysis. Wiley. Wikidata Q120123877. pp. 208–211.
  17. ^ Yin, Ping; Fan, Xitao (January 2001). "Estimating R2 Shrinkage in Multiple Regression: A Comparison of Different Analytical Methods" (PDF). The Journal of Experimental Education. 69 (2): 203–224. doi:10.1080/00220970109600656. ISSN 0022-0973. S2CID 121614674.
  18. ^ a b c d Shieh, Gwowen (2008-04-01). "Improved shrinkage estimation of squared multiple correlation coefficient and squared cross-validity coefficient". Organizational Research Methods. 11 (2): 387–407. doi:10.1177/1094428106292901. ISSN 1094-4281. S2CID 55098407.
  19. ^ Olkin, Ingram; Pratt, John W. (March 1958). "Unbiased estimation of certain correlation coefficients". The Annals of Mathematical Statistics. 29 (1): 201–211. doi:10.1214/aoms/1177706717. ISSN 0003-4851.
  20. ^ Karch, Julian (2020-09-29). "Improving on Adjusted R-Squared". Collabra: Psychology. 6 (45). doi:10.1525/collabra.343. hdl:1887/3161248. ISSN 2474-7394.
  21. ^ Richard Anderson-Sprecher, "Model Comparisons and R2", The American Statistician, Volume 48, Issue 2, 1994, pp. 113–117.
  22. ^ a b c Nagelkerke, N. J. D. (September 1991). "A Note on a General Definition of the Coefficient of Determination" (PDF). Biometrika. 78 (3): 691–692. doi:10.1093/biomet/78.3.691. JSTOR 2337038.
  23. ^ "regression – R implementation of coefficient of partial determination". Cross Validated.
  24. ^ a b c Hoornweg, Victor (2018). "Part II: On Keeping Parameters Fixed". Science: Under Submission. Hoornweg Press. ISBN 978-90-829188-0-9.
  25. ^ Cox, D. R.; Snell, E. J. (1989). The Analysis of Binary Data (2nd ed.). Chapman and Hall.
  26. ^ Magee, L. (1990). "R2 measures based on Wald and likelihood ratio joint significance tests". The American Statistician. 44 (3): 250–3. doi:10.1080/00031305.1990.10475731.
  27. ^ Nagelkerke, Nico J. D. (1992). Maximum Likelihood Estimation of Functional Relationships, Pays-Bas. Lecture Notes in Statistics. Vol. 69. ISBN 978-0-387-97721-8.
  28. ^ OriginLab webpage, http://www.originlab.com/doc/Origin-Help/LR-Algorithm. Retrieved February 9, 2016.
  29. ^ Wright, Sewall (January 1921). "Correlation and causation". Journal of Agricultural Research. 20: 557–585.
