Heteroscedasticity

From Wikipedia, the free encyclopedia

Plot with random data showing heteroscedasticity.

In statistics, a collection of random variables is heteroscedastic if there are sub-populations that have different variabilities from others. Here "variability" could be quantified by the variance or any other measure of statistical dispersion. Thus heteroscedasticity is the absence of homoscedasticity. The spellings homoskedasticity and heteroskedasticity are also frequently used.[1][2]

The possible existence of heteroscedasticity is a major concern in the application of regression analysis, including the analysis of variance, because the presence of heteroscedasticity can invalidate statistical tests of significance that assume that the modelling errors are uncorrelated and normally distributed and that their variances do not vary with the effects being modelled. Similarly, in testing for differences between sub-populations using a location test, some standard tests assume that variances within groups are equal.

The term means "differing variance" and comes from the Greek "hetero" ('different') and "skedasis" ('dispersion').

Definition

Suppose there is a sequence of random variables $\{Y_t\}_{t=1}^{n}$ and a sequence of vectors of random variables $\{X_t\}_{t=1}^{n}$. In dealing with conditional expectations of $Y_t$ given $X_t$, the sequence $\{Y_t\}_{t=1}^{n}$ is said to be heteroscedastic if the conditional variance of $Y_t$ given $X_t$ changes with $t$. Some authors refer to this as conditional heteroscedasticity to emphasize that it is the sequence of conditional variances that changes, not the unconditional variance. In fact, it is possible to observe conditional heteroscedasticity even in a sequence of unconditionally homoscedastic random variables; the converse, however, does not hold. If the variance changes only because of changes in the value of $X$ and not because of a dependence on the index $t$, the changing variance can be described using a scedastic function.
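
A minimal simulation sketch of this setup follows; the scedastic function, coefficients, and variable names are illustrative assumptions, not from the article's sources:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(1.0, 10.0, size=n)   # the regressors X_t

def scedastic_function(x):
    """Assumed conditional variance of Y given X = x (illustrative form)."""
    return 0.5 * x**2

# Y_t = 2 + 3 X_t + e_t, with Var(e_t | X_t) = scedastic_function(X_t):
# the variance changes with the value of X, not with the index t.
y = 2.0 + 3.0 * x + rng.normal(0.0, np.sqrt(scedastic_function(x)))

# Empirical check: the residual spread grows with X.
resid = y - (2.0 + 3.0 * x)
print("residual s.d. for X < 3:", resid[x < 3.0].std())
print("residual s.d. for X > 8:", resid[x > 8.0].std())
```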

When using some statistical techniques, such as ordinary least squares (OLS), a number of assumptions are typically made. One of these is that the error term has a constant variance. This might not be true even if the error term is assumed to be drawn from identical distributions.

For example, the variance of the error term could change with each observation, often increasing over time, which is common in cross-sectional and time series measurements. Heteroscedasticity is often studied as part of econometrics, which frequently deals with data exhibiting it. While the influential 1980 paper by Halbert White used the term "heteroskedasticity" rather than "heteroscedasticity",[3] the latter spelling has been employed more frequently in later works.[4]

Consequences

One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Breaking this assumption means that the Gauss–Markov theorem does not apply, so OLS estimators are not the Best Linear Unbiased Estimators (BLUE): they no longer have the lowest variance among all linear unbiased estimators. Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, although it can bias the ordinary least squares estimates of the variance (and, thus, the standard errors) of the coefficients, possibly above or below the true population variance. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate of the relationship between the predictor variable and the outcome, but the standard errors, and therefore the inferences drawn from them, are suspect. Biased standard errors lead to biased inference, so the results of hypothesis tests may be wrong. For example, if OLS is performed on a heteroscedastic data set, yielding biased standard error estimates, a researcher might fail to reject a null hypothesis at a given significance level even though the null hypothesis is in fact false for the population (making a type II error).

Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data do not come from a normal distribution). This result is used to justify using a normal distribution, or a chi-squared distribution (depending on how the test statistic is calculated), when conducting a hypothesis test, and it holds even under heteroscedasticity. More precisely, the OLS estimator in the presence of heteroscedasticity is asymptotically normal, when properly normalized and centered, with a variance-covariance matrix that differs from the case of homoscedasticity. In 1980, White proposed a consistent estimator for the variance-covariance matrix of the asymptotic distribution of the OLS estimator.[3] This validates hypothesis testing using OLS estimators together with White's variance-covariance estimator under heteroscedasticity.
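
A sketch of White's estimator on simulated data follows; the sandwich formula $(X'X)^{-1}\left(\sum_i \hat e_i^2 x_i x_i'\right)(X'X)^{-1}$ is White's HC0 estimator, which statsmodels also exposes through cov_type="HC0" (data and names here are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1, 10, n)
y = 2 + 3 * x + rng.normal(0, x)             # error s.d. grows with x
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                     # classical standard errors
robust = sm.OLS(y, X).fit(cov_type="HC0")    # White's estimator

# Manual White sandwich for comparison.
e = ols.resid
bread = np.linalg.inv(X.T @ X)
meat = X.T @ (e[:, None] ** 2 * X)           # sum of e_i^2 x_i x_i'
sandwich = bread @ meat @ bread
print(np.sqrt(np.diag(sandwich)))            # matches robust.bse
print(robust.bse)
```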

Heteroscedasticity is also a major practical issue encountered in ANOVA problems.[5] The F test can still be used in some circumstances.[6]

However, it has been said that students in econometrics should not overreact to heteroscedasticity.[4] One author wrote, "unequal error variance is worth correcting only when the problem is severe."[7] Another word of caution: "heteroscedasticity has never been a reason to throw out an otherwise good model."[4][8]

With the advent of heteroscedasticity-consistent standard errors, which allow for inference without specifying the conditional second moment of the error term, testing for conditional homoscedasticity is not as important as it once was.[citation needed]

The econometrician Robert Engle won the 2003 Nobel Memorial Prize in Economic Sciences for his studies on regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modelling technique.[citation needed]
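
A minimal simulation sketch of Engle's ARCH(1) model follows, in which the conditional variance of the error depends on the previous squared error; the parameter values a0 and a1 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
a0, a1, n = 0.2, 0.7, 1000
e = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = a0 / (1 - a1)                 # unconditional variance
for t in range(1, n):
    sigma2[t] = a0 + a1 * e[t - 1] ** 2   # conditional variance
    e[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# The series is unconditionally homoscedastic but conditionally
# heteroscedastic: the variance clusters through time.
print(e.var(), a0 / (1 - a1))
```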

Detection

Absolute value of residuals for simulated first-order heteroscedastic data.

There are several methods to test for the presence of heteroscedasticity. Although tests for heteroscedasticity between groups can formally be considered as a special case of testing within regression models, some tests have structures specific to this case.

Tests in regression

Tests for heteroscedasticity in a regression setting include the Park test,[9] the Glejser test,[10] White's test,[3][12] the Breusch–Pagan test,[13] and the Goldfeld–Quandt test.[14]

Tests for grouped data

For data divided into groups, standard tests for equality of variances between the groups include the F-test of equality of variances, Levene's test, and Bartlett's test.

These tests consist of a test statistic (a mathematical expression yielding a numerical value as a function of the data), a hypothesis that is going to be tested (the null hypothesis), an alternative hypothesis, and a statement about the distribution of the test statistic under the null hypothesis.

Many introductory statistics and econometrics books, for pedagogical reasons, present these tests under the assumption that the data at hand come from a normal distribution. It is a common misconception that this assumption is necessary. Most of the methods of detecting heteroscedasticity outlined above can be modified for use even when the data do not come from a normal distribution: in many cases the assumption can be relaxed, yielding a test procedure based on the same or similar test statistics but with the distribution under the null hypothesis evaluated by alternative routes, for example by using asymptotic distributions obtained from asymptotic theory[citation needed] or by resampling.
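
As a hedged sketch of two of the regression-based tests above, the following uses the real statsmodels functions het_breuschpagan and het_white; the simulated data and variable names are illustrative only:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1, 10, n)
y = 2 + 3 * x + rng.normal(0, x)             # heteroscedastic errors
X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid

# Null hypothesis of both tests: the errors are homoscedastic.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print("Breusch-Pagan LM p-value:", lm_pvalue)   # small => reject H0

w_stat, w_pvalue, _, _ = het_white(resid, X)
print("White test p-value:", w_pvalue)
```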

Fixes

There are four common corrections for heteroscedasticity. They are:

  • View logged data. Unlogged series that are growing exponentially often appear to have increasing variability as the series rises over time. The variability in percentage terms may, however, be rather stable.
  • Use a different specification for the model (different X variables, or perhaps non-linear transformations of the X variables).
  • Apply a weighted least squares estimation method, in which OLS is applied to transformed or weighted values of X and Y (see the sketch following this list). The weights vary over observations, usually depending on the changing error variances. In one variation the weights are directly related to the magnitude of the dependent variable, and this corresponds to least squares percentage regression.[15]
  • Use heteroscedasticity-consistent standard errors (HCSE), which, while still biased in finite samples, are consistent estimators of the standard errors in regression models with heteroscedasticity.[3] This method corrects for heteroscedasticity without altering the values of the coefficients. It may be superior to regular OLS inference because, if heteroscedasticity is present, it corrects for it, while if the data are homoscedastic, the standard errors are equivalent to conventional standard errors estimated by OLS. Several modifications of White's method of computing heteroscedasticity-consistent standard errors have been proposed as corrections with superior finite-sample properties.
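
A minimal sketch of the weighted least squares correction, assuming the error standard deviation is proportional to x (an assumption that must be justified for real data; the 1/x**2 weights below follow from it):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(1, 10, n)
y = 2 + 3 * x + rng.normal(0, x)             # Var(e | x) proportional to x^2
X = sm.add_constant(x)

wls = sm.WLS(y, X, weights=1.0 / x**2).fit() # weight = 1 / Var(e | x)
print(wls.params)                            # coefficient estimates
print(wls.bse)                               # typically smaller s.e. than OLS
```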

Examples

Heteroscedasticity often occurs when there is a large difference among the sizes of the observations.

  • A classic example of heteroscedasticity is that of income versus expenditure on meals. As one's income increases, the variability of food consumption will increase. A poorer person will spend a rather constant amount by always eating inexpensive food; a wealthier person may occasionally buy inexpensive food and at other times eat expensive meals. Those with higher incomes display a greater variability of food consumption.
  • Imagine you are watching a rocket take off nearby and measuring the distance it has traveled once each second. In the first couple of seconds your measurements may be accurate to the nearest centimeter, say. However, 5 minutes later as the rocket recedes into space, the accuracy of your measurements may only be good to 100 m, because of the increased distance, atmospheric distortion and a variety of other factors. The data you collect would exhibit heteroscedasticity.

Multivariate case

The study of heteroscedasticity has been generalized to the multivariate case, which deals with the covariances of vector observations instead of the variance of scalar observations. One version of this is to use covariance matrices as the multivariate measure of dispersion. Several authors have considered tests in this context, for both regression and grouped-data situations.[16][17] Bartlett's test for heteroscedasticity between grouped data, used most commonly in the univariate case, has also been extended to the multivariate case, but a tractable solution only exists for two groups.[18] Approximations exist for more than two groups, and both are called Box's M test.
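
A sketch of Box's M test via its chi-squared approximation follows, using the standard textbook form of the statistic; the function name and simulated data are illustrative, and established statistical packages should be preferred in practice:

```python
import numpy as np
from scipy import stats

def box_m(groups):
    """Box's M test for equality of covariance matrices across groups.

    groups: list of (n_i, p) arrays of multivariate observations.
    Returns the chi-squared statistic and its approximate p-value.
    """
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    N = ns.sum()
    covs = [np.cov(g, rowvar=False) for g in groups]          # S_i (unbiased)
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - k)
    M = (N - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    # Box's correction factor for the chi-squared approximation.
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1.0 / (ns - 1)) - 1.0 / (N - k))
    df = 0.5 * (k - 1) * p * (p + 1)
    chi2 = M * (1 - c)
    return chi2, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(4)
g1 = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], size=50)
g2 = rng.multivariate_normal([0, 0], [[4, 0], [0, 4]], size=50)
print(box_m([g1, g2]))   # small p-value: covariance matrices differ
```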

References

  1. ^ For the Greek etymology of the term, see J. Huston McCulloch (1985), "On Heteros*edasticity", Econometrica, 53(2), p. 483.
  2. ^ McCulloch, J. Huston (March 1985). "On Heteros*edasticity". Econometrica. 53 (2): 483. McCulloch argued that the word should be spelled with a 'k' rather than a 'c', because it was constructed in English directly from Greek roots rather than entering English indirectly via French.
  3. ^ a b c d White, Halbert (1980). "A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity". Econometrica. 48 (4): 817–838. doi:10.2307/1912934. JSTOR 1912934.
  4. ^ a b c Gujarati, D. N.; Porter, D. C. (2009). Basic Econometrics (Fifth ed.). Boston: McGraw-Hill Irwin. p. 400. ISBN 9780073375779.
  5. ^ Jinadasa, Gamage; Weerahandi, Sam (1998). "Size performance of some tests in one-way anova". Communications in Statistics - Simulation and Computation. 27 (3): 625. doi:10.1080/03610919808813500.
  6. ^ Bathke, A (2004). "The ANOVA F test can still be used in some balanced designs with unequal variances and nonnormal data". Journal of Statistical Planning and Inference. 126 (2): 413. doi:10.1016/j.jspi.2003.09.010.
  7. ^ Fox, J. (1997). Applied Regression Analysis, Linear Models, and Related Methods. California: Sage Publications. p. 306. (Cited in Gujarati et al. 2009, p. 400)
  8. ^ Mankiw, N. G. (1990). "A Quick Refresher Course in Macroeconomics". Journal of Economic Literature. 28 (4): 1645–1660 [p. 1648]. doi:10.3386/w3256. JSTOR 2727441.
  9. ^ R. E. Park (1966). "Estimation with Heteroscedastic Error Terms". Econometrica. 34 (4): 888. doi:10.2307/1910108. JSTOR 1910108.
  10. ^ Glejser, H. (1969). "A new test for heteroscedasticity". Journal of the American Statistical Association. 64 (325): 316–323. doi:10.1080/01621459.1969.10500976.
  11. ^ Journal of Econometrics. doi:10.1016/S0304-4076(00)00016-6.
  12. ^ "Diagnostics of Heteroscedasticity: White's test". itfeature.com. http://itfeature.com.
  13. ^ "Diagnostics of Heteroscedasticity: Breusch–Pagan test". itfeature.com. http://itfeature.com.
  14. ^ "Diagnostics of Heteroscedasticity: Goldfeld–Quandt test". itfeature.com. http://itfeature.com.
  15. ^ Tofallis, C (2008). "Least Squares Percentage Regression". Journal of Modern Applied Statistical Methods. 7: 526–534. doi:10.2139/ssrn.1406472. SSRN 1406472.
  16. ^ doi:10.1080/00949650410001646979.
  17. ^ Gupta, A. K.; Tang, J. (1984). "Distribution of likelihood ratio statistic for testing equality of covariance matrices of multivariate Gaussian models". Biometrika. 71 (3): 555–559. doi:10.1093/biomet/71.3.555. JSTOR 2336564.
  18. ^ doi:10.1002/0470011815.b2a13048.

Further reading

Most statistics textbooks will include at least some material on heteroscedasticity. Some examples are:

  • Asteriou, Dimitros; Hall, Stephen G. (2011). Applied Econometrics (Second ed.). Palgrave MacMillan. pp. 109–147. ISBN 978-0-230-27182-1.
  • Dougherty, Christopher (2002). Introduction to Econometrics. New York: Oxford University Press. pp. 220–237. ISBN 0-19-877643-8.
  • Greene, W. H. (1993). Econometric Analysis. Prentice–Hall. ISBN 0-13-013297-7. An introductory but thorough general text, considered the standard for a pre-doctorate university econometrics course.
  • Gujarati, Damodar N.; Porter, Dawn C. (2009). Basic Econometrics (Fifth ed.). New York: McGraw-Hill Irwin. pp. 365–411. ISBN 978-0-07-337577-9.
  • Hamilton, J. D. (1994). Time Series Analysis. Princeton University Press. ISBN 0-691-04289-6. A standard reference text for time series analysis; it contains an introduction to ARCH models.
  • Kmenta, Jan (1986). Elements of Econometrics (Second ed.). New York: Macmillan. pp. 269–298. ISBN 0-02-365070-2.
  • Maddala, G. S. (2001). Introduction to Econometrics (Third ed.). New York: Wiley. pp. 199–226. ISBN 0-471-49728-2.
  • Studenmund, A. H. (1992). Using Econometrics (2nd ed.). ISBN 0-673-52125-7. (devotes a chapter to heteroscedasticity)
  • Verbeek, Marno (2004). A Guide to Modern Econometrics (2nd ed.). Chichester: John Wiley & Sons.
  • Vinod, H. D. (2008). Hands On Intermediate Econometrics Using R: Templates for Extending Dozens of Practical Examples. Hackensack, NJ: World Scientific Publishers. ISBN 978-981-281-885-0. (Section 2.8 provides R snippets)
