In statistics, the Breusch–Godfrey test is used to assess the validity of some of the modelling assumptions inherent in applying regression-like models to observed data series.[1][2] In particular, it tests for the presence of serial correlation that has not been included in a proposed model structure and which, if present, would mean that incorrect conclusions would be drawn from other tests, or that sub-optimal estimates of model parameters would be obtained.
The regression models to which the test can be applied include cases where lagged values of the dependent variables are used as independent variables in the model's representation for later observations. This type of structure is common in econometric models.
The test is named after Trevor S. Breusch and Leslie G. Godfrey.
The Breusch–Godfrey test is a test for autocorrelation in the errors in a regression model. It makes use of the residuals from the model being considered in a regression analysis, and a test statistic is derived from these. The null hypothesis is that there is no serial correlation of any order up to p.[citation needed]
Because the test is based on the idea of Lagrange multiplier testing, it is sometimes referred to as an LM test for serial correlation.[3]
A similar assessment can also be carried out with the Durbin–Watson test and the Ljung–Box test. However, the test is more general than the one based on the Durbin–Watson statistic (or Durbin's h statistic), which is only valid for non-stochastic regressors and for testing the possibility of a first-order autoregressive model (e.g. AR(1)) for the regression errors.[citation needed] The BG test has none of these restrictions, and is statistically more powerful than Durbin's h statistic.[citation needed] The BG test is also considered more general than the Ljung–Box test, because the latter requires the assumption of strict exogeneity while the BG test does not; the BG test, however, requires stronger forms of predeterminedness and conditional homoscedasticity.
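For illustration, a minimal sketch of how these diagnostics might be computed with Python's statsmodels library is given below; the simulated data, the AR(1) error process and the variable names are assumptions made only for this example and are not part of the test's definition.

```python
# Sketch: Breusch-Godfrey test alongside Durbin-Watson and Ljung-Box diagnostics
# on a simple OLS regression with artificially serially correlated errors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, acorr_ljungbox
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)

# Generate AR(1) errors so that serial correlation is actually present.
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

res = sm.OLS(y, sm.add_constant(x)).fit()  # fit the regression of interest by OLS

lm, lm_pvalue, fval, f_pvalue = acorr_breusch_godfrey(res, nlags=2)
print("Breusch-Godfrey LM statistic:", lm, "p-value:", lm_pvalue)
print("Durbin-Watson statistic:", durbin_watson(res.resid))   # close to 2 under no AR(1)
print(acorr_ljungbox(res.resid, lags=[2]))                     # Ljung-Box on the residuals
```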
Consider a linear regression of any form, for example

$$ y_t = \beta_1 + \beta_2 x_{t,1} + \beta_3 x_{t,2} + u_t , $$
where the errors might follow an AR(p) autoregressive scheme, as follows:

$$ u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \cdots + \rho_p u_{t-p} + \varepsilon_t . $$
The simple regression model is first fitted by ordinary least squares to obtain a set of sample residuals $\hat{u}_t$.
Breusch and Godfrey[citation needed] proved that, if the following auxiliary regression model is fitted

$$ \hat{u}_t = \beta_1 + \beta_2 x_{t,1} + \beta_3 x_{t,2} + \rho_1 \hat{u}_{t-1} + \rho_2 \hat{u}_{t-2} + \cdots + \rho_p \hat{u}_{t-p} + \varepsilon_t $$
and if the usual coefficient of determination ($R^2$ statistic) is calculated for this model:

$$ R^2 = 1 - \frac{\sum_t \hat{\varepsilon}_t^{\,2}}{\sum_t \left( \hat{u}_t - \bar{\hat{u}} \right)^2} , $$

where $\bar{\hat{u}}$ stands for the arithmetic mean of the residuals. One may average the residuals over the last $n - p$ observations, where $n$ is the number of observations in the original model and $p$ is the number of error lags used in the auxiliary regression. There is also a version of the test in which the missing pre-sample residuals are replaced by zeros; in that version the number of observations in the auxiliary regression equals the original number of observations.
The following asymptotic approximation can be used for the distribution of the test statistic:

$$ (n - p)\, R^2 \;\sim\; \chi^2_p , $$

when the null hypothesis holds (that is, when there is no serial correlation of any order up to p).
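The procedure above can also be carried out by hand. The sketch below is one possible implementation, not a canonical one: it assumes numpy-array inputs, uses the last $n - p$ observations in the auxiliary regression, and refers $(n - p)R^2$ to a $\chi^2_p$ distribution; the zero-padding variant mentioned above would differ only in how the pre-sample residuals are handled.

```python
# Hand-rolled sketch of the Breusch-Godfrey procedure: fit the original model
# by OLS, regress the residuals on the original regressors plus p lagged
# residuals, and compare (n - p) * R^2 of that auxiliary regression with a
# chi-squared(p) distribution.
import numpy as np
import statsmodels.api as sm
from scipy import stats

def breusch_godfrey_lm(y, X, p):
    """Return the LM statistic and its asymptotic p-value (illustrative sketch)."""
    X = sm.add_constant(np.asarray(X))
    resid = sm.OLS(np.asarray(y), X).fit().resid        # residuals of the original model
    n = len(resid)
    # p columns of lagged residuals, aligned with observations t = p, ..., n-1.
    lags = np.column_stack([resid[p - k : n - k] for k in range(1, p + 1)])
    aux = sm.OLS(resid[p:], np.column_stack([X[p:], lags])).fit()
    lm = (n - p) * aux.rsquared                          # test statistic
    return lm, stats.chi2.sf(lm, p)                      # p-value from chi^2 with p df
```

The output can be compared with the built-in statsmodels routine used earlier; small numerical differences are to be expected because implementations differ in how they treat the pre-sample residuals.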