Durbin–Watson statistic


In statistics, the Durbin–Watson statistic is a test statistic used to detect the presence of autocorrelation at lag 1 in the residuals (prediction errors) from a regression analysis. It is named after James Durbin and Geoffrey Watson. The small-sample distribution of this ratio was derived by John von Neumann (von Neumann, 1941). Durbin and Watson (1950, 1951) applied this statistic to the residuals from least squares regressions, and developed bounds tests for the null hypothesis that the errors are serially uncorrelated against the alternative that they follow a first-order autoregressive process. Note that the distribution of this test statistic does not depend on the estimated regression coefficients and the variance of the errors.[1]

A similar assessment can also be carried out with the Breusch–Godfrey test and the Ljung–Box test.

Computing and interpreting the Durbin–Watson statistic


If $e_t$ is the residual given by $e_t = \rho e_{t-1} + \nu_t$, the Durbin–Watson test statistic is

$$ d = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}, $$

where $T$ is the number of observations. For large $T$, $d$ is approximately equal to $2(1-\hat{\rho})$, where $\hat{\rho}$ is the sample autocorrelation of the residuals at lag 1.[2] A value of $d = 2$ therefore indicates no autocorrelation. The value of $d$ always lies between 0 and 4. If the Durbin–Watson statistic is substantially less than 2, there is evidence of positive serial correlation; as a rough rule of thumb, a value below 1.0 may be cause for alarm. Small values of $d$ indicate that successive error terms are positively correlated. If $d > 2$, successive error terms are negatively correlated. In regressions, this can imply an underestimation of the level of statistical significance.
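As a minimal illustration, the statistic can be computed directly from a vector of regression residuals. The sketch below uses NumPy and simulates AR(1)-style errors with $\rho = 0.7$ purely for demonstration; the function and variable names are illustrative, not part of any package:

```python
import numpy as np

def durbin_watson_stat(e):
    """Durbin-Watson statistic from a 1-D array of residuals e_1, ..., e_T."""
    e = np.asarray(e, dtype=float)
    diff = np.diff(e)                       # e_t - e_{t-1} for t = 2, ..., T
    return np.sum(diff ** 2) / np.sum(e ** 2)

# Residuals following an AR(1)-like process with rho = 0.7 (illustrative only)
rng = np.random.default_rng(0)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()

d = durbin_watson_stat(e)
rho_hat = np.corrcoef(e[1:], e[:-1])[0, 1]  # lag-1 sample autocorrelation
print(d, 2 * (1 - rho_hat))                 # the two values are close for large T
```

The printed value of $d$ falls well below 2, consistent with positive serial correlation in the simulated errors.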

To test for positive autocorrelation at significance level $\alpha$, the test statistic $d$ is compared to lower and upper critical values ($d_{L,\alpha}$ and $d_{U,\alpha}$):

  - If $d < d_{L,\alpha}$, there is statistical evidence that the error terms are positively autocorrelated.
  - If $d > d_{U,\alpha}$, there is no statistical evidence that the error terms are positively autocorrelated.
  - If $d_{L,\alpha} < d < d_{U,\alpha}$, the test is inconclusive.

Positive serial correlation is serial correlation in which a positive error for one observation increases the chances of a positive error for another observation.

To test for negative autocorrelation at significance level $\alpha$, the test statistic $(4 - d)$ is compared to lower and upper critical values ($d_{L,\alpha}$ and $d_{U,\alpha}$):

  - If $(4 - d) < d_{L,\alpha}$, there is statistical evidence that the error terms are negatively autocorrelated.
  - If $(4 - d) > d_{U,\alpha}$, there is no statistical evidence that the error terms are negatively autocorrelated.
  - If $d_{L,\alpha} < (4 - d) < d_{U,\alpha}$, the test is inconclusive.

Negative serial correlation implies that a positive error for one observation increases the chance of a negative error for another observation, and that a negative error for one observation increases the chance of a positive error for another.

The critical values $d_{L,\alpha}$ and $d_{U,\alpha}$ vary by the level of significance $\alpha$ and the degrees of freedom in the regression equation. Their derivation is complex; statisticians typically obtain them from the appendices of statistical texts.
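The bounds test itself is mechanical once the critical values are in hand. The sketch below is a hypothetical helper, not a library function, and the critical values in the example are illustrative figures of the kind tabulated for roughly $n = 50$ observations, two regressors, and $\alpha = 0.05$:

```python
def dw_bounds_test(d, d_L, d_U, alternative="positive"):
    """Durbin-Watson bounds-test decision rule.

    d           : observed Durbin-Watson statistic
    d_L, d_U    : tabulated lower and upper critical values for the chosen
                  significance level, sample size and number of regressors
    alternative : "positive" compares d to the bounds,
                  "negative" compares (4 - d) to the same bounds
    """
    stat = d if alternative == "positive" else 4.0 - d
    if stat < d_L:
        return "reject H0: evidence of serial correlation"
    if stat > d_U:
        return "do not reject H0"
    return "inconclusive"

print(dw_bounds_test(d=1.10, d_L=1.46, d_U=1.63))  # -> reject H0
print(dw_bounds_test(d=1.75, d_L=1.46, d_U=1.63))  # -> do not reject H0
```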

If the design matrix $\mathbf{X}$ of the regression is known, exact critical values for the distribution of $d$ under the null hypothesis of no serial correlation can be calculated. Under the null hypothesis, $d$ is distributed as

$$ \frac{\sum_{i=1}^{n-k} \nu_i \xi_i^2}{\sum_{i=1}^{n-k} \xi_i^2}, $$

where $n$ is the number of observations and $k$ is the number of regression variables; the $\xi_i$ are independent standard normal random variables; and the $\nu_i$ are the nonzero eigenvalues of $(\mathbf{I} - \mathbf{X}(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T})\mathbf{A}$, where $\mathbf{A}$ is the matrix that transforms the residuals into the $d$ statistic, i.e. $d = \mathbf{e}^{T}\mathbf{A}\mathbf{e}$.[3] A number of computational algorithms for finding percentiles of this distribution are available.[4]
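The exact algorithms in the literature (e.g. Pan's procedure[4]) work with this eigenvalue representation analytically; the sketch below instead approximates the null distribution by Monte Carlo, which is enough to illustrate the construction. Here $\mathbf{A}$ is taken to be the first-difference quadratic-form matrix $D^{T}D$ whose quadratic form gives the numerator of $d$; all names and the example design matrix are illustrative assumptions:

```python
import numpy as np

def dw_null_distribution(X, n_sims=100_000, seed=0):
    """Monte Carlo approximation to the null distribution of d for design matrix X."""
    n, k = X.shape
    # A = D'D, with D the (n-1) x n first-difference matrix, so that
    # e'Ae = sum_t (e_t - e_{t-1})^2 is the numerator of d.
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    A = D.T @ D
    # Residual-maker matrix M = I - X (X'X)^{-1} X'
    M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
    # The nonzero eigenvalues of M A coincide with those of the symmetric
    # matrix M A M; the k smallest eigenvalues of M A M are (numerically) zero.
    nu = np.linalg.eigvalsh(M @ A @ M)[k:]
    xi = np.random.default_rng(seed).standard_normal((n_sims, n - k))
    return (xi ** 2 @ nu) / np.sum(xi ** 2, axis=1)

# Illustrative design matrix: intercept plus linear trend, n = 40 observations
n = 40
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
sims = dw_null_distribution(X)
print(np.quantile(sims, 0.05))  # approximate exact 5% lower critical value of d
```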

Although serial correlation does not affect the consistency of the estimated regression coefficients, it does affect our ability to conduct valid statistical tests. First, the F-statistic for the overall significance of the regression may be inflated under positive serial correlation because the mean squared error (MSE) will tend to underestimate the population error variance. Second, positive serial correlation typically causes the ordinary least squares (OLS) standard errors of the regression coefficients to underestimate the true standard errors. Consequently, when positive serial correlation is present, standard linear regression analysis will typically produce artificially small standard errors for the coefficients. These small standard errors inflate the estimated t-statistics, suggesting significance where perhaps there is none, and the inflated t-statistics may in turn lead us to incorrectly reject null hypotheses about the population values of the model's parameters more often than we would if the standard errors were correctly estimated.

If the Durbin–Watson statistic indicates the presence of serial correlation of the residuals, this can be remedied by using the Cochrane–Orcutt procedure.

The Durbin–Watson statistic, while displayed by many regression analysis programs, is not applicable in certain situations. For instance, when lagged dependent variables are included among the explanatory variables, it is inappropriate to use this test. Durbin's h-test (see below) or likelihood ratio tests, which are valid in large samples, should be used instead.

Durbin h-statistic


The Durbin–Watson statistic is biased for autoregressive moving average models, so that autocorrelation is underestimated. But for large samples one can easily compute the unbiased, normally distributed h-statistic:

$$ h = \left(1 - \frac{1}{2} d\right) \sqrt{\frac{T}{1 - T \cdot \widehat{\operatorname{Var}}(\widehat{\beta}_1)}}, $$

using the Durbin–Watson statistic $d$ and the estimated variance $\widehat{\operatorname{Var}}(\widehat{\beta}_1)$ of the regression coefficient of the lagged dependent variable, provided

$$ T \cdot \widehat{\operatorname{Var}}(\widehat{\beta}_1) < 1. $$
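Computing the h-statistic from regression output is straightforward once $d$, the sample size, and the standard error of the lagged dependent variable's coefficient are known. The following is a small sketch with illustrative numbers; the function name and example values are assumptions, not taken from any package:

```python
import math

def durbin_h(d, T, var_beta1):
    """Durbin h-statistic from the Durbin-Watson d, sample size T, and the
    estimated variance of the lagged dependent variable's coefficient.
    Defined only when T * var_beta1 < 1."""
    if T * var_beta1 >= 1:
        raise ValueError("h is undefined: T * Var(beta_1) must be less than 1")
    return (1 - d / 2) * math.sqrt(T / (1 - T * var_beta1))

# Illustrative values: d = 1.7, T = 100, standard error of beta_1 = 0.08
h = durbin_h(d=1.7, T=100, var_beta1=0.08 ** 2)
print(h)  # compared with standard normal critical values (e.g. 1.96 at the 5% level)
```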

Implementations in statistics packages

  1. R: the dwtest function in the lmtest package, the durbinWatsonTest (or dwt for short) function in the car package, and pdwtest and pbnftest for panel models in the plm package.[5]
  2. MATLAB: the dwtest function in the Statistics Toolbox.
  3. Mathematica: the Durbin–Watson (d) statistic is included as an option in the LinearModelFit function.
  4. SAS: a standard output when using proc model, and an option (dw) when using proc reg.
  5. EViews: automatically calculated when using OLS regression.
  6. gretl: automatically calculated when using OLS regression.
  7. Stata: the command estat dwatson, following regress in time series data.[6] Engle's LM test for autoregressive conditional heteroskedasticity (ARCH), a test for time-dependent volatility, the Breusch–Godfrey test, and Durbin's alternative test for serial correlation are also available. All (except -dwatson-) test separately for higher-order serial correlations. The Breusch–Godfrey test and Durbin's alternative test also allow regressors that are not strictly exogenous.
  8. Excel: although Microsoft Excel 2007 does not have a specific Durbin–Watson function, the d-statistic may be calculated using =SUMXMY2(x_array,y_array)/SUMSQ(array).
  9. Minitab: the option to report the statistic in the Session window can be found under the "Options" box under Regression and via the "Results" box under General Regression.
  10. Python: a durbin_watson function is included in the statsmodels package (statsmodels.stats.stattools.durbin_watson), but statistical tables for critical values are not available there; see the sketch after this list for a usage example.
  11. SPSS: Included as an option in the Regression function.
  12. Julia: the DurbinWatsonTest function is available in the HypothesisTests package.[7]
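For the statsmodels entry above, a minimal usage sketch (the data-generating step is illustrative; durbin_watson and the OLS API are the package's own):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Illustrative data: y regressed on a constant and one regressor x
rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = 1.0 + 2.0 * x + rng.standard_normal(100)

model = sm.OLS(y, sm.add_constant(x)).fit()
print(durbin_watson(model.resid))  # values near 2 suggest no lag-1 autocorrelation
```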


References

  1. ^ Chatterjee, Samprit; Simonoff, Jeffrey (2013). Handbook of Regression Analysis. John Wiley & Sons. ISBN 978-1118532812.
  2. ^ Gujarati (2003), p. 469.
  3. ^ Durbin, J.; Watson, G. S. (1971). "Testing for serial correlation in least squares regression. III". Biometrika. 58 (1): 1–19. doi:10.2307/2334313. JSTOR 2334313.
  4. ^ Farebrother, R. W. (1980). "Algorithm AS 153: Pan's procedure for the tail probabilities of the Durbin–Watson statistic". Journal of the Royal Statistical Society, Series C. 29 (2): 224–227.
  5. ^ Hateka, Neeraj R. (2010). "Tests for Detecting Autocorrelation". Principles of Econometrics: An Introduction (Using R). SAGE Publications. pp. 379–382. ISBN 978-81-321-0660-9.
  6. ^ "regress postestimation time series — Postestimation tools for regress with time series" (PDF). Stata Manual.
  7. ^ "Time series tests". juliastats.org. Retrieved 2020-02-04.
