Wald test

In statistics, the Wald test is one of three classical approaches to hypothesis testing, together with the Lagrange multiplier test and the likelihood-ratio test. It is based on the asymptotic normality of the estimator: it tests whether the difference between the unrestricted parameter estimate and the hypothesized value, scaled by the precision matrix of the unrestricted estimate, is statistically significant. Under the null hypothesis, the test statistic has an asymptotic χ2-distribution with degrees of freedom equal to the number of restrictions. If the hypothesis involves only a single restriction, the statistic reduces to a squared (pseudo) t-ratio that is, however, not actually t-distributed.[1] The finite-sample distributions of Wald tests are generally unknown.

An advantage of the Wald test is that it only requires estimation of the unrestricted model, which lowers the computational burden compared to the likelihood-ratio test. However, a major disadvantage is that (in finite samples) it is not invariant to changes in the representation of the null hypothesis; in other words, algebraically equivalent expressions of a non-linear parameter restriction can lead to different values of the test statistic.[2][3] This is because the Wald statistic is derived from a Taylor expansion,[4] and different ways of writing equivalent nonlinear expressions lead to nontrivial differences in the corresponding Taylor coefficients. Another aberration, known as the Hauck–Donner effect, can occur in binomial models when the estimated (unconstrained) parameter is close to the boundary of the parameter space, for instance when a fitted probability is extremely close to zero or one; as a result, the Wald statistic is no longer monotonically increasing in the distance between the unconstrained estimate and the value under the null hypothesis.[5]
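A quick numerical sketch of the Hauck–Donner effect for a single binomial proportion on the logit scale may help; the counts below are made up, not from the article. Testing H0: β = 0 (a success probability of 0.5), the Wald statistic eventually shrinks as the estimate moves toward the boundary, because the standard error grows faster than the estimate.

    import numpy as np

    # Hypothetical data: k successes in n = 100 trials, testing H0: beta = 0
    # (success probability 0.5) on the logit scale.
    n = 100
    for k in (90, 95, 99):
        p_hat = k / n
        beta_hat = np.log(p_hat / (1 - p_hat))       # MLE on the logit scale
        se = 1 / np.sqrt(n * p_hat * (1 - p_hat))    # asymptotic standard error
        print(k, beta_hat / se)  # Wald z: about 6.59, 6.42, 4.57 -- not monotone in k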

Mathematical details

Under the Wald statistical test, the maximum likelihood estimate $\hat{\theta}$ of the parameter(s) of interest $\theta$ is compared with the proposed value $\theta_0$, with the assumption that the difference between the two will be approximately normally distributed. Typically the square of the difference is compared to a chi-squared distribution.

Test on a single parameter

In the univariate case, the Wald statistic is

    W = \frac{(\hat{\theta} - \theta_0)^2}{\operatorname{var}(\hat{\theta})},

which is compared against a chi-squared distribution with one degree of freedom.

Alternatively, the difference can be compared to a normal distribution. In this case the test statistic is

    \sqrt{W} = \frac{\hat{\theta} - \theta_0}{\operatorname{se}(\hat{\theta})},

where $\operatorname{se}(\hat{\theta})$ is the standard error of the maximum likelihood estimate (MLE). A reasonable estimate of the standard error for the MLE is given by $1/\sqrt{I_n(\hat{\theta}_{\mathrm{MLE}})}$, where $I_n$ is the Fisher information of the parameter.
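As a concrete illustration, here is a minimal sketch of the single-parameter test for a binomial success probability; the counts and the hypothesized value are made up for the example, and the Fisher information $n/(\hat{\theta}(1-\hat{\theta}))$ is the standard Bernoulli-sample one.

    import numpy as np
    from scipy import stats

    # Hypothetical data: k successes in n Bernoulli trials.
    n, k = 100, 62
    theta_hat = k / n            # MLE of the success probability
    theta_0 = 0.5                # value under the null hypothesis

    # Fisher information of the sample, evaluated at the MLE.
    fisher_info = n / (theta_hat * (1 - theta_hat))
    se = 1 / np.sqrt(fisher_info)

    # Squared form, compared to a chi-squared distribution with 1 degree of freedom ...
    W = (theta_hat - theta_0) ** 2 / se ** 2
    p_chi2 = stats.chi2.sf(W, df=1)

    # ... or the signed form, compared to the standard normal distribution.
    z = (theta_hat - theta_0) / se
    p_norm = 2 * stats.norm.sf(abs(z))
    print(W, p_chi2, z, p_norm)  # the two p-values coincide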

Test(s) on multiple parameters

The Wald test can be used to test a single hypothesis on multiple parameters, as well as to test jointly multiple hypotheses on single or multiple parameters. Let $\hat{\theta}_n$ be our sample estimator of P parameters (i.e., $\hat{\theta}_n$ is a $P \times 1$ vector), which is supposed to follow asymptotically a normal distribution with covariance matrix V, that is, $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d} N(0, V)$. The test of Q hypotheses on the P parameters is expressed with a $Q \times P$ matrix R:

    H_0 : R\theta = r
    H_1 : R\theta \neq r

The test statistic is:

    (R\hat{\theta}_n - r)' \left[ R \left( \tfrac{\hat{V}_n}{n} \right) R' \right]^{-1} (R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q,

where $\hat{V}_n$ is an estimator of the covariance matrix.[6]
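For concreteness, a small sketch of this quadratic form with made-up estimates; the numbers, the restriction matrix and the covariance estimate are illustrative assumptions only.

    import numpy as np
    from scipy import stats

    # Hypothetical unrestricted estimates and an estimate of their covariance
    # matrix (i.e. V_hat / n, the covariance of theta_hat itself).
    theta_hat = np.array([1.2, 0.4, -0.7])
    cov_hat = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.05, 0.01],
                        [0.00, 0.01, 0.03]])

    # Joint hypothesis H0: theta_2 = 0 and theta_2 = theta_3, written R theta = r.
    R = np.array([[0.0, 1.0,  0.0],
                  [0.0, 1.0, -1.0]])
    r = np.array([0.0, 0.0])

    diff = R @ theta_hat - r
    W = diff @ np.linalg.solve(R @ cov_hat @ R.T, diff)  # Wald quadratic form
    p_value = stats.chi2.sf(W, df=R.shape[0])            # Q degrees of freedom
    print(W, p_value)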

Proof

Suppose $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d} N(0, V)$. Then, by Slutsky's theorem and by the properties of the normal distribution, multiplying by R gives the distribution:

    R\sqrt{n}(\hat{\theta}_n - \theta) = \sqrt{n}(R\hat{\theta}_n - r) \xrightarrow{d} N(0, RVR').

Recalling that a quadratic form of a normal distribution has a chi-squared distribution:

    \sqrt{n}(R\hat{\theta}_n - r)' \left[ RVR' \right]^{-1} \sqrt{n}(R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q.

Rearranging n finally gives:

    (R\hat{\theta}_n - r)' \left[ R \left( \tfrac{V}{n} \right) R' \right]^{-1} (R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q.

What if the covariance matrix is not known a priori and needs to be estimated from the data? If we have a consistent estimator $\hat{V}_n$ of $V$, then by Slutsky's theorem applied to the result above, we also have:

    (R\hat{\theta}_n - r)' \left[ R \left( \tfrac{\hat{V}_n}{n} \right) R' \right]^{-1} (R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q.
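A small Monte Carlo sketch can be used to check the limiting χ2_Q claim; the bivariate normal model, the sample size and the single restriction below are illustrative assumptions, not part of the article.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulate the null distribution of the Wald statistic for the mean of
    # bivariate normal data, with one linear restriction that holds exactly.
    n, reps = 500, 2000
    theta = np.array([1.0, 2.0])       # true mean vector
    R = np.array([[1.0, -1.0]])        # restriction: theta_1 - theta_2 = -1
    r = R @ theta                      # chosen so that H0 is true
    W = np.empty(reps)
    for i in range(reps):
        x = rng.normal(theta, 1.0, size=(n, 2))
        theta_hat = x.mean(axis=0)             # estimator of theta
        V_hat = np.cov(x, rowvar=False)        # estimates V (per-observation covariance)
        diff = R @ theta_hat - r
        W[i] = diff @ np.linalg.solve(R @ (V_hat / n) @ R.T, diff)

    # Under H0 the statistic should be approximately chi-squared with Q = 1 d.o.f.
    print(np.mean(W), stats.chi2.mean(df=1))        # both close to 1
    print(np.mean(W > stats.chi2.ppf(0.95, df=1)))  # rejection rate near 0.05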

Nonlinear hypothesis

In the standard form, the Wald test is used to test linear hypotheses that can be represented by a single matrix R. If one wishes to test a non-linear hypothesis of the form:

    H_0 : c(\theta) = 0
    H_1 : c(\theta) \neq 0

the test statistic becomes:

    c(\hat{\theta}_n)' \left[ c'(\hat{\theta}_n) \left( \tfrac{\hat{V}_n}{n} \right) c'(\hat{\theta}_n)' \right]^{-1} c(\hat{\theta}_n) \xrightarrow{d} \chi^2_Q,

where $c'(\hat{\theta}_n)$ is the derivative of c evaluated at the sample estimator. This result is obtained using the delta method, which uses a first-order approximation of the variance.
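To make the delta-method construction concrete, here is a minimal sketch for a single made-up non-linear restriction, c(θ) = θ_1·θ_2 − 1 = 0; the estimates and covariance below are illustrative only.

    import numpy as np
    from scipy import stats

    # Hypothetical unrestricted estimates and their covariance estimate V_hat / n.
    theta_hat = np.array([0.8, 1.3])
    cov_hat = np.array([[0.020, 0.005],
                        [0.005, 0.030]])

    # Non-linear restriction c(theta) = 0, here theta_1 * theta_2 - 1 = 0.
    def c(theta):
        return np.array([theta[0] * theta[1] - 1.0])

    # Jacobian of c (analytic here; a finite-difference approximation would also do).
    def c_jac(theta):
        return np.array([[theta[1], theta[0]]])

    val = c(theta_hat)
    J = c_jac(theta_hat)
    W = val @ np.linalg.solve(J @ cov_hat @ J.T, val)  # delta-method Wald statistic
    p_value = stats.chi2.sf(W, df=val.size)
    print(W, p_value)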

Non-invariance to re-parameterisations

The fact that one uses an approximation of the variance has the drawback that the Wald statistic is not invariant to a non-linear transformation or reparametrisation of the hypothesis: it can give different answers to the same question, depending on how the question is phrased.[7][8] For example, asking whether R = 1 is the same as asking whether log R = 0; but the Wald statistic for R = 1 is not the same as the Wald statistic for log R = 0, because there is in general no neat relationship between the standard errors of R and log R, so the latter has to be approximated.
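A tiny numerical sketch of this non-invariance (the estimate and standard error are made up): the same hypothesis phrased as R = 1 or as log R = 0 yields different Wald statistics once the standard error of log R is approximated by the delta method.

    import numpy as np

    # Hypothetical estimate of a ratio R and its standard error.
    R_hat, se_R = 1.5, 0.4

    # Wald statistic for H0: R = 1.
    W_level = (R_hat - 1.0) ** 2 / se_R ** 2

    # Wald statistic for the equivalent H0: log R = 0, with the standard error
    # of log R approximated by the delta method as se_R / R_hat.
    se_log = se_R / R_hat
    W_log = np.log(R_hat) ** 2 / se_log ** 2

    print(W_level, W_log)  # about 1.56 vs 2.31 -- same hypothesis, different statistics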

Alternatives to the Wald test

There exist several alternatives to the Wald test, namely the likelihood-ratio test and the Lagrange multiplier test (also known as the score test). Robert F. Engle showed that the Wald test, the likelihood-ratio test and the Lagrange multiplier test are asymptotically equivalent.[9] Although they are asymptotically equivalent, in finite samples they could disagree enough to lead to different conclusions.

There are several reasons to prefer the likelihood ratio test or the Lagrange multiplier to the Wald test:[10][11][12]

  • Non-invariance: As argued above, the Wald test is not invariant to reparametrization, whereas likelihood-ratio tests give exactly the same answer whether we work with R, log R or any other monotonic transformation of R.
  • The Wald test uses two approximations (that we know the standard error, and that the distribution is χ2), whereas the likelihood-ratio test uses one approximation (that the distribution is χ2).[citation needed]
  • The Wald test requires an estimate under the alternative hypothesis, corresponding to the "full" model. In some cases, the model is simpler under the null hypothesis, so that one might prefer to use the score test (also called the Lagrange multiplier test), which has the advantage that it can be formulated in situations where the variability is difficult to estimate; e.g. the Cochran–Mantel–Haenszel test is a score test.[13]

References

  1. ^ Davidson, Russell; MacKinnon, James G. (1993). "The Method of Maximum Likelihood: Fundamental Concepts and Notation". Estimation and Inference in Econometrics. New York: Oxford University Press. p. 89. ISBN 0-19-506011-3.
  2. ^ Gregory, Allan W.; Veall, Michael R. (1985). "Formulating Wald Tests of Nonlinear Restrictions". Econometrica. 53 (6): 1465–1468. JSTOR 1913221.
  3. ^ Dagenais, Marcel G.; Dufour, Jean-Marie (1991). "Invariance, Nonlinear Models, and Asymptotic Tests". Econometrica. 59 (6): 1601–1615. JSTOR 2938281.
  4. ^ Hayashi, Fumio (2000). Econometrics. Princeton: Princeton University Press. pp. 489–491. ISBN 1-4008-2383-8.
  5. ^ Hauck, Walter W., Jr.; Donner, Allan (1977). "Wald's Test as Applied to Hypotheses in Logit Analysis". Journal of the American Statistical Association. 72 (360a): 851–853. doi:10.1080/01621459.1977.10479969.
  6. ^ Harrell, Frank E., Jr. (2001). "Section 9.3.1". Regression modeling strategies. New York: Springer-Verlag. ISBN 0387952322.
  7. ^ Fears, Thomas R.; Benichou, Jacques; Gail, Mitchell H. (1996). "A reminder of the fallibility of the Wald statistic". The American Statistician. 50 (3): 226–227. doi:10.1080/00031305.1996.10474384.
  8. ^ Gregory, Allan W.; Veall, Michael R. (1985). "Formulating Wald Tests of Nonlinear Restrictions". Econometrica. 53 (6): 1465–1468. doi:10.2307/1913221.
  9. ^ Engle, Robert F. (1983). "Wald, Likelihood Ratio, and Lagrange Multiplier Tests in Econometrics". In Intriligator, M. D.; Griliches, Z. (eds.). Handbook of Econometrics. II. Elsevier. pp. 796–801. ISBN 978-0-444-86185-6.
  10. ^ Harrell, Frank E., Jr. (2001). "Section 9.3.3". Regression modeling strategies. New York: Springer-Verlag. ISBN 0387952322.
  11. ^ Collett, David (1994). Modelling Survival Data in Medical Research. London: Chapman & Hall. ISBN 0412448807.
  12. ^ Pawitan, Yudi (2001). In All Likelihood. New York: Oxford University Press. ISBN 0198507658.
  13. ^ Agresti, Alan (2002). Categorical Data Analysis (2nd ed.). Wiley. p. 232. ISBN 0471360937.
