
Homoscedasticity and heteroscedasticity

From Wikipedia, the free encyclopedia
Plot with random data showing homoscedasticity: at each value of x, the y-value of the dots has about the same variance.
Plot with random data showing heteroscedasticity: the variance of the y-values of the dots increases with increasing values of x.

In statistics, a sequence of random variables is homoscedastic (/ˌhoʊmoʊskəˈdæstɪk/) if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance.[a] The term originates from the Ancient Greek σκεδάννυμι skedánnymi, 'to scatter'.[1][2][3]

Assuming a variable is homoscedastic when in reality it is heteroscedastic (/ˌhɛtəroʊskəˈdæstɪk/) results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.

The existence of heteroscedasticity is a major concern in regression analysis and the analysis of variance, as it invalidates statistical tests of significance that assume that the modelling errors all have the same variance. While the ordinary least squares (OLS) estimator is still unbiased in the presence of heteroscedasticity, it is inefficient, and inference based on the assumption of homoskedasticity is misleading. In that case, generalized least squares (GLS) was frequently used in the past.[4][5] Nowadays, standard practice in econometrics is to use heteroskedasticity-consistent standard errors instead of GLS, as GLS can exhibit strong bias in small samples if the actual skedastic function is unknown.[6]

Because heteroscedasticity concerns expectations of the second moment of the errors, its presence is referred to as misspecification of the second order.[7]

The econometrician Robert Engle was awarded the 2003 Nobel Memorial Prize for Economics for his studies on regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modeling technique.[8]

Definition


Consider the linear regression equation $y_i = x_i \beta_i + \varepsilon_i,\ i = 1, \ldots, N$, where the dependent random variable $y_i$ equals the deterministic variable $x_i$ times coefficient $\beta_i$ plus a random disturbance term $\varepsilon_i$ that has mean zero. The disturbances are homoscedastic if the variance of $\varepsilon_i$ is a constant $\sigma^2$; otherwise, they are heteroscedastic. In particular, the disturbances are heteroscedastic if the variance of $\varepsilon_i$ depends on $i$ or on the value of $x_i$. One way they might be heteroscedastic is if $\sigma_i^2 = x_i \sigma^2$ (an example of a scedastic function), so the variance is proportional to the value of $x$.

More generally, if the variance-covariance matrix of the disturbance $\varepsilon_i$ across $i$ has a nonconstant diagonal, the disturbance is heteroscedastic.[9] The matrices below are covariances when there are just three observations across time. The disturbance in matrix A is homoscedastic; this is the simple case where OLS is the best linear unbiased estimator. The disturbances in matrices B and C are heteroscedastic. In matrix B, the variance is time-varying, increasing steadily across time; in matrix C, the variance depends on the value of $x$. The disturbance in matrix D is homoscedastic because the diagonal variances are constant, even though the off-diagonal covariances are non-zero and ordinary least squares is inefficient for a different reason: serial correlation.

$$A = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad B = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \qquad C = \sigma^2 \begin{bmatrix} x_1 & 0 & 0 \\ 0 & x_2 & 0 \\ 0 & 0 & x_3 \end{bmatrix} \qquad D = \sigma^2 \begin{bmatrix} 1 & \rho & \rho^2 \\ \rho & 1 & \rho \\ \rho^2 & \rho & 1 \end{bmatrix}$$
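For illustration, here is a minimal NumPy sketch (sample size, coefficient, and seed are arbitrary illustrative choices, not from the article) that simulates disturbances with the constant-variance structure of matrix A and the x-proportional structure of matrix C:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma = 500, 2.0, 1.0
x = rng.uniform(1.0, 10.0, size=n)

# Matrix A above: homoscedastic disturbances with constant variance sigma^2.
eps_homo = rng.normal(0.0, sigma, size=n)

# Matrix C above: heteroscedastic disturbances with sigma_i^2 = x_i * sigma^2,
# i.e. standard deviation sigma * sqrt(x_i).
eps_hetero = rng.normal(0.0, sigma * np.sqrt(x))

y_homo = beta * x + eps_homo
y_hetero = beta * x + eps_hetero

# Compare disturbance variance below and above the median of x:
lo = x < np.median(x)
print(eps_homo[lo].var(), eps_homo[~lo].var())      # roughly equal
print(eps_hetero[lo].var(), eps_hetero[~lo].var())  # second is clearly larger
```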

Examples


Heteroscedasticity often occurs when there is a large difference among the sizes of the observations.

A classic example of heteroscedasticity is that of income versus expenditure on meals. A wealthy person may eat inexpensive food sometimes and expensive food at other times. A poor person will almost always eat inexpensive food. Therefore, people with higher incomes exhibit greater variability in expenditures on food.

At a rocket launch, an observer measures the distance traveled by the rocket once per second. In the first couple of seconds, the measurements may be accurate to the nearest centimeter. After five minutes, the accuracy of the measurements may be good only to 100 m, because of the increased distance, atmospheric distortion, and a variety of other factors. So the measurements of distance may exhibit heteroscedasticity.

Consequences


One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Breaking this assumption means that the Gauss–Markov theorem does not apply, meaning that OLS estimators are not the best linear unbiased estimators (BLUE) and their variance is not the lowest of all other unbiased estimators. Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, although it can cause ordinary least squares estimates of the variance (and, thus, standard errors) of the coefficients to be biased, possibly above or below the true population variance. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate for the relationship between the predictor variable and the outcome, but standard errors and therefore inferences obtained from data analysis are suspect. Biased standard errors lead to biased inference, so results of hypothesis tests are possibly wrong. For example, if OLS is performed on a heteroscedastic data set, yielding biased standard error estimation, a researcher might fail to reject a null hypothesis at a given significance level when that null hypothesis was actually uncharacteristic of the actual population (making a type II error).
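A minimal Monte Carlo sketch (all parameters and the variance function are assumptions chosen for the example) illustrating both effects: the OLS slope stays approximately unbiased under heteroscedasticity, while the classical standard error misstates the estimator's actual sampling spread:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 200, 2000, 1.0
x = rng.uniform(1.0, 5.0, size=n)
X = np.column_stack([np.ones(n), x])

slopes, naive_ses = [], []
for _ in range(reps):
    eps = rng.normal(0.0, np.sqrt(x))         # Var(eps_i) = x_i: heteroscedastic
    y = beta * x + eps
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS coefficients
    e = y - X @ b
    s2 = e @ e / (n - 2)                      # classical error-variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)         # classical OLS covariance
    slopes.append(b[1])
    naive_ses.append(np.sqrt(cov[1, 1]))

print(np.mean(slopes))                     # close to beta = 1.0: slope unbiased
print(np.mean(naive_ses), np.std(slopes))  # naive SE vs. actual sampling spread
```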

Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data does not come from a normal distribution). This result is used to justify using a normal distribution, or a chi square distribution (depending on how the test statistic is calculated), when conducting a hypothesis test. This holds even under heteroscedasticity. More precisely, the OLS estimator in the presence of heteroscedasticity is asymptotically normal, when properly normalized and centered, with a variance-covariance matrix that differs from the case of homoscedasticity. In 1980, White proposed a consistent estimator for the variance-covariance matrix of the asymptotic distribution of the OLS estimator.[2] This validates the use of hypothesis testing using OLS estimators and White's variance-covariance estimator under heteroscedasticity.
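White's estimator replaces the classical covariance estimate $s^2 (X'X)^{-1}$ with the sandwich form $(X'X)^{-1} X' \operatorname{diag}(e_i^2) X (X'X)^{-1}$, where $e_i$ are the OLS residuals. A minimal sketch of this (HC0) form:

```python
import numpy as np

def white_cov(X, resid):
    """White's (1980) HC0 covariance: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}."""
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * resid[:, None] ** 2)  # sum_i e_i^2 x_i x_i'
    return bread @ meat @ bread

# Usage, given a design matrix X and response y:
# b = np.linalg.lstsq(X, y, rcond=None)[0]
# robust_se = np.sqrt(np.diag(white_cov(X, y - X @ b)))
```

Packaged implementations exist as well; for instance, statsmodels exposes the same estimator through OLS(y, X).fit(cov_type='HC0').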

Heteroscedasticity is also a major practical issue encountered in ANOVA problems.[10] The F test can still be used in some circumstances.[11]

However, it has been said that students in econometrics should not overreact to heteroscedasticity.[3] One author wrote, "unequal error variance is worth correcting only when the problem is severe."[12] In addition, another word of caution was in the form, "heteroscedasticity has never been a reason to throw out an otherwise good model."[3][13] With the advent of heteroscedasticity-consistent standard errors allowing for inference without specifying the conditional second moment of the error term, testing conditional homoscedasticity is not as important as in the past.[6]

For any non-linear model (for instance Logit and Probit models), however, heteroscedasticity has more severe consequences: the maximum likelihood estimates (MLE) of the parameters will usually be biased, as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroscedasticity, or the distribution is a member of the linear exponential family and the conditional expectation function is correctly specified).[14][15] Yet, in the context of binary choice models (Logit or Probit), heteroscedasticity will only result in a positive scaling effect on the asymptotic mean of the misspecified MLE (i.e., the model that ignores heteroscedasticity).[16] As a result, the predictions which are based on the misspecified MLE will remain correct. In addition, the misspecified Probit and Logit MLE will be asymptotically normally distributed, which allows performing the usual significance tests (with the appropriate variance-covariance matrix). However, regarding general hypothesis testing, as pointed out by Greene, "simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption. Consequently, the virtue of a robust covariance matrix in this setting is unclear."[17]

Correction


There are several common corrections for heteroscedasticity. They are:

  • A stabilizing transformation of the data, e.g. logarithmized data. Non-logarithmized series that are growing exponentially often appear to have increasing variability as the series rises over time. The variability in percentage terms may, however, be rather stable.
  • Use a different specification for the model (different X variables, or perhaps non-linear transformations of the X variables).
  • Apply a weighted least squares estimation method, in which OLS is applied to transformed or weighted values of X and Y (see the sketch after this list). The weights vary over observations, usually depending on the changing error variances. In one variation the weights are directly related to the magnitude of the dependent variable, and this corresponds to least squares percentage regression.[18]
  • Heteroscedasticity-consistent standard errors (HCSE), while still biased, improve upon OLS estimates.[2] HCSE is a consistent estimator of standard errors in regression models with heteroscedasticity. This method corrects for heteroscedasticity without altering the values of the coefficients. It may be superior to regular OLS because, if heteroscedasticity is present, it corrects for it; if the data are homoscedastic, the standard errors are equivalent to conventional standard errors estimated by OLS. Several modifications of the White method of computing heteroscedasticity-consistent standard errors have been proposed as corrections with superior finite-sample properties.
  • Wild bootstrapping can be used as a resampling method that respects the differences in the conditional variance of the error term. An alternative is resampling observations instead of errors. Note that resampling errors without respect for the affiliated values of the observations enforces homoskedasticity and thus yields incorrect inference.
  • Use MINQUE or even the customary estimators $s_i^2 = (n_i - 1)^{-1} \sum_j \left( y_{ij} - \bar{y}_i \right)^2$ (for $i = 1, 2, \ldots, k$ independent samples with $j = 1, 2, \ldots, n_i$ observations each), whose efficiency losses are not substantial when the number of observations per sample is large ($n_i > 5$), especially for a small number of independent samples.[19]
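As referenced in the weighted least squares item above, here is a minimal sketch of that correction, assuming the error variances are known up to proportionality (the weight choice is an assumption for the example):

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: OLS on rows rescaled by sqrt(w_i).
    With w_i proportional to 1/Var(eps_i), the rescaled disturbances
    sqrt(w_i) * eps_i are homoscedastic."""
    s = np.sqrt(w)
    return np.linalg.lstsq(X * s[:, None], y * s, rcond=None)[0]

# If Var(eps_i) = x_i * sigma^2 (matrix C in the Definition section),
# the appropriate weights are w_i = 1 / x_i:
# b_wls = wls(X, y, 1.0 / x)
```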

Testing

Absolute value of residuals for simulated first-order heteroscedastic data

Residuals can be tested for homoscedasticity using the Breusch–Pagan test,[20] which performs an auxiliary regression of the squared residuals on the independent variables. From this auxiliary regression, the explained sum of squares is retained, divided by two, and then becomes the test statistic for a chi-squared distribution with degrees of freedom equal to the number of independent variables.[21] The null hypothesis of this chi-squared test is homoscedasticity, and the alternative hypothesis would indicate heteroscedasticity. Since the Breusch–Pagan test is sensitive to departures from normality or small sample sizes, the Koenker–Bassett or 'generalized Breusch–Pagan' test is commonly used instead.[22] From the auxiliary regression, it retains the R-squared value, which is then multiplied by the sample size to become the test statistic for a chi-squared distribution (with the same degrees of freedom). Although it is not necessary for the Koenker–Bassett test, the Breusch–Pagan test requires that the squared residuals also be divided by the residual sum of squares divided by the sample size.[22] Testing for groupwise heteroscedasticity can be done with the Goldfeld–Quandt test.[23]
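As a concrete illustration of the Koenker ('generalized Breusch–Pagan') variant described above, a minimal sketch, assuming the design matrix already contains a constant column:

```python
import numpy as np

def koenker_bp(X, y):
    """Koenker's 'generalized Breusch-Pagan' statistic: n * R^2 from the
    auxiliary regression of squared OLS residuals on the regressors,
    asymptotically chi-squared with (X.shape[1] - 1) degrees of freedom.
    X is assumed to include a constant column."""
    n = len(y)
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e2 = (y - X @ b) ** 2
    g = np.linalg.lstsq(X, e2, rcond=None)[0]   # auxiliary regression
    ss_res = ((e2 - X @ g) ** 2).sum()
    ss_tot = ((e2 - e2.mean()) ** 2).sum()
    return n * (1.0 - ss_res / ss_tot)
```

In practice one would usually rely on a packaged implementation such as statsmodels.stats.diagnostic.het_breuschpagan, which returns the Lagrange multiplier statistic together with its p-value.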

Due to the standard use of heteroskedasticity-consistent standard errors and the problem of pre-testing, econometricians nowadays rarely use tests for conditional heteroskedasticity.[6]

List of tests


Although tests for heteroscedasticity between groups can formally be considered as a special case of testing within regression models, some tests have structures specific to this case.

Tests in regression

  • Breusch–Pagan test[20]
  • White test[2]
  • Goldfeld–Quandt test[23]
  • Park test[24]
  • Glejser test[25][26]

Tests for grouped data

  • F-test of equality of variances
  • Levene's test
  • Bartlett's test

Generalisations


Homoscedastic distributions


Two or more normal distributions, $N(\mu_1, \Sigma_1), N(\mu_2, \Sigma_2)$, are both homoscedastic and lack serial correlation if they share the same diagonal in their covariance matrices, $(\Sigma_1)_{ii} = (\Sigma_2)_{jj}\ \forall i = j$, and their off-diagonal entries are zero. Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis. The concept of homoscedasticity can be applied to distributions on spheres.[27]
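Fisher's linear discriminant analysis operationalizes this assumption by estimating a single covariance matrix pooled across all classes. A minimal sketch of the pooled estimate (the function name and the (n_i, d) group-array layout are illustrative assumptions):

```python
import numpy as np

def pooled_covariance(groups):
    """Pooled covariance matrix across classes, as used by Fisher's linear
    discriminant analysis under its homoscedasticity assumption.
    Each element of `groups` is an (n_i, d) array of observations."""
    N = sum(len(g) for g in groups)
    k = len(groups)
    S = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups)
    return S / (N - k)
```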

Multivariate data


The study of homoscedasticity and heteroscedasticity has been generalized to the multivariate case, which deals with the covariances of vector observations instead of the variance of scalar observations. One version of this is to use covariance matrices as the multivariate measure of dispersion. Several authors have considered tests in this context, for both regression and grouped-data situations.[28][29] Bartlett's test for heteroscedasticity between grouped data, used most commonly in the univariate case, has also been extended to the multivariate case, but a tractable solution only exists for two groups.[30] Approximations exist for more than two groups, and they are both called Box's M test.
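For illustration, a minimal sketch of the Box's M statistic itself, $M = (N - k)\ln|S_{\text{pooled}}| - \sum_i (n_i - 1)\ln|S_i|$; the scaled chi-squared or F approximations needed to turn it into a p-value are omitted:

```python
import numpy as np

def box_m(groups):
    """Box's M statistic for equality of covariance matrices:
    M = (N - k) * ln|S_pooled| - sum_i (n_i - 1) * ln|S_i|.
    Each element of `groups` is an (n_i, d) array of observations."""
    k = len(groups)
    ns = [len(g) for g in groups]
    N = sum(ns)
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((ni - 1) * Si for ni, Si in zip(ns, covs)) / (N - k)
    m = (N - k) * np.linalg.slogdet(pooled)[1]
    for ni, Si in zip(ns, covs):
        m -= (ni - 1) * np.linalg.slogdet(Si)[1]
    return m
```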


Notes

  1. ^ The spellings homoskedasticity and heteroskedasticity are also frequently used.

References

  1. ^ For the Greek etymology of the term, see McCulloch, J. Huston (1985). "On Heteros*edasticity". Econometrica. 53 (2): 483. JSTOR 1911250.
  2. ^ a b c d White, Halbert (1980). "A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity". Econometrica. 48 (4): 817–838. CiteSeerX 10.1.1.11.7646. doi:10.2307/1912934. JSTOR 1912934.
  3. ^ a b c Gujarati, D. N.; Porter, D. C. (2009). Basic Econometrics (Fifth ed.). Boston: McGraw-Hill Irwin. p. 400. ISBN 9780073375779.
  4. ^ Goldberger, Arthur S. (1964). Econometric Theory. New York: John Wiley & Sons. pp. 238–243. ISBN 9780471311010.
  5. ^ Johnston, J. (1972). Econometric Methods. New York: McGraw-Hill. pp. 214–221.
  6. ^ a b c Angrist, Joshua D.; Pischke, Jörn-Steffen (2009). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press. doi:10.1515/9781400829828. ISBN 978-1-4008-2982-8.
  7. ^ Long, J. Scott; Trivedi, Pravin K. (1993). "Some Specification Tests for the Linear Regression Model". In Bollen, Kenneth A.; Long, J. Scott (eds.). Testing Structural Equation Models. London: Sage. pp. 66–110. ISBN 978-0-8039-4506-7.
  8. ^ Engle, Robert F. (July 1982). "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation". Econometrica. 50 (4): 987–1007. doi:10.2307/1912773. ISSN 0012-9682. JSTOR 1912773.
  9. ^ Peter Kennedy, A Guide to Econometrics, 5th edition, p. 137.
  10. ^ Jinadasa, Gamage; Weerahandi, Sam (1998). "Size performance of some tests in one-way anova". Communications in Statistics – Simulation and Computation. 27 (3): 625. doi:10.1080/03610919808813500.
  11. ^ Bathke, A. (2004). "The ANOVA F test can still be used in some balanced designs with unequal variances and nonnormal data". Journal of Statistical Planning and Inference. 126 (2): 413–422. doi:10.1016/j.jspi.2003.09.010.
  12. ^ Fox, J. (1997). Applied Regression Analysis, Linear Models, and Related Methods. California: Sage Publications. p. 306. (Cited in Gujarati et al. 2009, p. 400.)
  13. ^ Mankiw, N. G. (1990). "A Quick Refresher Course in Macroeconomics". Journal of Economic Literature. 28 (4): 1645–1660 [p. 1648]. doi:10.3386/w3256. JSTOR 2727441.
  14. ^ Giles, Dave (May 8, 2013). "Robust Standard Errors for Nonlinear Models". Econometrics Beat.
  15. ^ Gourieroux, C.; Monfort, A.; Trognon, A. (1984). "Pseudo Maximum Likelihood Methods: Theory". Econometrica. 52 (3): 681–700. doi:10.2307/1913471. ISSN 0012-9682.
  16. ^ Ginker, T.; Lieberman, O. (2017). "Robustness of binary choice models to conditional heteroscedasticity". Economics Letters. 150: 130–134. doi:10.1016/j.econlet.2016.11.024.
  17. ^ Greene, William H. (2012). "Estimation and Inference in Binary Choice Models". Econometric Analysis (Seventh ed.). Boston: Pearson Education. pp. 730–755 [p. 733]. ISBN 978-0-273-75356-8.
  18. ^ Tofallis, C. (2008). "Least Squares Percentage Regression". Journal of Modern Applied Statistical Methods. 7: 526–534. doi:10.2139/ssrn.1406472. SSRN 1406472.
  19. ^ Rao, J. N. K. (March 1973). "On the Estimation of Heteroscedastic Variances". Biometrics. 29 (1): 11–24. doi:10.2307/2529672. JSTOR 2529672.
  20. ^ Breusch, T. S.; Pagan, A. R. (1979). "A Simple Test for Heteroscedasticity and Random Coefficient Variation". Econometrica. 47 (5): 1287–1294. doi:10.2307/1911963. ISSN 0012-9682. JSTOR 1911963.
  21. ^ Ullah, Muhammad Imdad (2012-07-26). "Breusch Pagan Test for Heteroscedasticity". Basic Statistics and Data Analysis. Retrieved 2020-11-28.
  22. ^ a b Pryce, Gwilym. "Heteroscedasticity: Testing and Correcting in SPSS" (PDF). pp. 12–18. Archived (PDF) from the original on 2017-03-27. Retrieved 26 March 2017.
  23. ^ Baum, Christopher F. (2006). "Stata Tip 38: Testing for Groupwise Heteroskedasticity". The Stata Journal. 6 (4): 590–592. doi:10.1177/1536867X0600600412. ISSN 1536-867X. S2CID 117349246.
  24. ^ Park, R. E. (1966). "Estimation with Heteroscedastic Error Terms". Econometrica. 34 (4): 888. doi:10.2307/1910108. JSTOR 1910108.
  25. ^ Glejser, H. (1969). "A new test for heteroscedasticity". Journal of the American Statistical Association. 64 (325): 316–323. doi:10.1080/01621459.1969.10500976.
  26. ^ Machado, José A. F.; Silva, J. M. C. Santos (2000). "Glejser's test revisited". Journal of Econometrics. 97 (1): 189–202. doi:10.1016/S0304-4076(00)00016-6.
  27. ^ Hamsici, Onur C.; Martinez, Aleix M. (2007). "Spherical-Homoscedastic Distributions: The Equivalency of Spherical and Normal Distributions in Classification". Journal of Machine Learning Research. 8: 1583–1623.
  28. ^ Holgersson, H. E. T.; Shukur, G. (2004). "Testing for multivariate heteroscedasticity". Journal of Statistical Computation and Simulation. 74 (12): 879. doi:10.1080/00949650410001646979. hdl:2077/24416. S2CID 121576769.
  29. ^ Gupta, A. K.; Tang, J. (1984). "Distribution of likelihood ratio statistic for testing equality of covariance matrices of multivariate Gaussian models". Biometrika. 71 (3): 555–559. doi:10.1093/biomet/71.3.555. JSTOR 2336564.
  30. ^ d'Agostino, R. B.; Russell, H. K. (2005). "Multivariate Bartlett Test". Encyclopedia of Biostatistics. doi:10.1002/0470011815.b2a13048. ISBN 978-0470849071.

Further reading


Most statistics textbooks will include at least some material on homoscedasticity and heteroscedasticity.

External links

Wikimedia Commons has media related to Heteroscedasticity.