Measurement invariance

From Wikipedia, the free encyclopedia

Measurement invariance or measurement equivalence is a statistical property of measurement that indicates that the same construct is being measured across some specified groups. For example, measurement invariance can be used to study whether a given measure is interpreted in a conceptually similar manner by respondents representing different genders or cultural backgrounds. Violations of measurement invariance may preclude meaningful interpretation of measurement data. Tests of measurement invariance are increasingly used in fields such as psychology to supplement evaluation of measurement quality rooted in classical test theory.[1]

Measurement invariance is often tested in the framework of multiple-group confirmatory factor analysis (CFA).[2] In the context of structural equation models, including CFA, measurement invariance is often termed factorial invariance.[3]

Definition

In the common factor model, measurement invariance may be defined as the following equality:

f(\mathbf{Y} \mid \boldsymbol{\eta}, s) = f(\mathbf{Y} \mid \boldsymbol{\eta})

where f(\cdot) is a distribution function, \mathbf{Y} is a vector of observed scores, \boldsymbol{\eta} is a factor score, and s denotes group membership (e.g., Caucasian = 0, African American = 1). Measurement invariance therefore entails that, given a subject's factor score, the subject's observed score does not depend on group membership.[4]

Types of invariance

Several different types of measurement invariance can be distinguished in the common factor model for continuous outcomes:[5]

1) Equal form: The number of factors and the pattern of factor-indicator relationships are identical across groups.
2) Equal loadings: Factor loadings are equal across groups.
3) Equal intercepts: When observed scores are regressed on each factor, the intercepts are equal across groups.
4) Equal residual variances: The residual variances of the observed scores not accounted for by the factors are equal across groups.

The same typology can be generalized to the case of discrete outcomes. The conditions are identical except that the third becomes equal thresholds: when observed scores are regressed on each factor, the thresholds (rather than the intercepts) are equal across groups.
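For continuous outcomes, the four conditions can be stated compactly in the multiple-group common factor model. The following sketch uses conventional notation not taken from the text above: in group g, \boldsymbol{\nu}_g denotes the intercepts, \boldsymbol{\Lambda}_g the factor loadings, and \boldsymbol{\Theta}_g the residual covariances.

```latex
\begin{aligned}
&\mathbf{Y}_g = \boldsymbol{\nu}_g + \boldsymbol{\Lambda}_g \boldsymbol{\eta} + \boldsymbol{\varepsilon}_g,
 \qquad \operatorname{Cov}(\boldsymbol{\varepsilon}_g) = \boldsymbol{\Theta}_g,
 \qquad g = 1, \dots, G,\\
&\text{equal form:} && \text{same zero/nonzero pattern in } \boldsymbol{\Lambda}_g \text{ for all } g,\\
&\text{equal loadings:} && \boldsymbol{\Lambda}_1 = \dots = \boldsymbol{\Lambda}_G,\\
&\text{equal intercepts:} && \boldsymbol{\nu}_1 = \dots = \boldsymbol{\nu}_G,\\
&\text{equal residual variances:} && \boldsymbol{\Theta}_1 = \dots = \boldsymbol{\Theta}_G.
\end{aligned}
```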

Each of these conditions corresponds to a multiple-group confirmatory factor model with specific constraints. The tenability of each model can be tested statistically by using a likelihood ratio test or other indices of fit. Meaningful comparisons between groups usually require that all four conditions are met, which is known as strict measurement invariance. However, strict measurement invariance rarely holds in applied contexts.[6] Invariance is therefore usually tested by sequentially introducing additional constraints, starting from the equal form condition and proceeding toward the equal residual variances condition, as long as model fit does not deteriorate along the way.
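This sequential procedure can be sketched numerically. In the Python sketch below, the χ2 values and degrees of freedom are assumed to come from multiple-group CFA models fit elsewhere; the numbers are purely hypothetical, and the closed form of the χ2 survival function for even degrees of freedom is used to keep the example dependency-free.

```python
import math

def chi2_sf(x, df):
    """Chi-square survival function P(X > x) for even df, via the
    closed form exp(-x/2) * sum_{i < df/2} (x/2)^i / i!."""
    if df % 2 != 0 or df <= 0:
        raise ValueError("this helper supports positive even df only")
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(df // 2))

# Hypothetical (chi2, df) pairs for increasingly constrained
# multiple-group CFA models; in practice these come from an SEM package.
models = [
    ("equal form",       42.1, 30),
    ("equal loadings",   48.7, 36),
    ("equal intercepts", 55.9, 42),
    ("equal residuals",  79.3, 48),
]

# Under the null hypothesis that the added constraints hold, the chi2
# difference between nested models is chi2-distributed with df equal
# to the difference in their degrees of freedom.
for (name0, x0, df0), (name1, x1, df1) in zip(models, models[1:]):
    diff, ddf = x1 - x0, df1 - df0
    p = chi2_sf(diff, ddf)
    verdict = "constraints tenable" if p >= 0.05 else "invariance questionable"
    print(f"{name0} -> {name1}: diff chi2 = {diff:.1f}, "
          f"df = {ddf}, p = {p:.3f} ({verdict})")
```

With these illustrative values, the added loading and intercept constraints are retained, while the residual-variance constraints are rejected, mirroring the observation above that strict invariance rarely holds.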

Tests for invariance

Although further research is needed on the application of various invariance tests and their respective criteria across diverse testing conditions, two approaches are common among applied researchers. For each model being compared (e.g., equal form, equal intercepts), a χ2 fit statistic is iteratively estimated from the minimization of the difference between the model-implied mean and covariance matrices and the observed mean and covariance matrices.[7] As long as the models under comparison are nested, the difference between the χ2 values of two CFA models of varying levels of invariance, evaluated against the difference in their respective degrees of freedom, follows a χ2 distribution (diff χ2) and can therefore be inspected for significance as an indication of whether increasingly restrictive models produce appreciable changes in model-data fit.[7]

However, there is some evidence that the diff χ2 is sensitive to factors unrelated to changes in the invariance-targeted constraints (e.g., sample size).[8] As a result, researchers are also advised to use the difference between the comparative fit indices (ΔCFI) of the two models specified to investigate measurement invariance. When the difference between the CFIs of two models of varying levels of measurement invariance (e.g., equal forms versus equal loadings) is greater than 0.01, invariance is likely untenable.[8] Note that the CFI values being subtracted are expected to come from nested models, as in the case of diff χ2 testing;[9] however, there is indication that applied researchers rarely take this into consideration when applying the CFI test.[10]
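The ΔCFI comparison can be illustrated with a short sketch. The CFI formula below is the standard definition based on the noncentrality of the target and null models; the fit statistics are hypothetical, and both target models are assumed to share the same null model, as the nesting requirement demands.

```python
def cfi(chisq, df, chisq_null, df_null):
    """Comparative fit index:
    1 - max(chi2 - df, 0) / max(chi2 - df, chi2_null - df_null, 0)."""
    d = max(chisq - df, 0.0)
    d_null = max(d, chisq_null - df_null, 0.0)
    return 1.0 - d / d_null if d_null > 0 else 1.0

# Hypothetical fit statistics; the null-model chi2 and df must be the
# same for both target models (same data, same null specification).
cfi_form     = cfi(42.1, 30, 880.0, 45)  # equal form
cfi_loadings = cfi(48.7, 36, 880.0, 45)  # equal loadings

delta_cfi = cfi_form - cfi_loadings
print(f"delta CFI = {delta_cfi:.4f}")
# Rule of thumb from Cheung & Rensvold (2002): a CFI drop greater than
# 0.01 suggests the added invariance constraints are untenable.
print("invariance likely untenable" if delta_cfi > 0.01
      else "no appreciable decrement in fit")
```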

References

  1. Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3, 4–70.
  2. Chen, F. F., Sousa, K. H., & West, S. G. (2005). Testing measurement invariance of second-order factor models. Structural Equation Modeling, 12, 471–492.
  3. Widaman, K. F., Ferrer, E., & Conger, R. D. (2010). Factorial invariance within longitudinal structural equation models: Measuring the same construct across time. Child Development Perspectives, 4, 10–18.
  4. Lubke, G. H., et al. (2003). On the relationship between sources of within- and between-group differences and measurement invariance in the common factor model. Intelligence, 31, 543–566.
  5. Brown, T. (2015). Confirmatory factor analysis for applied research (2nd ed.). The Guilford Press.
  6. Van de Schoot, R., Schmidt, P., De Beuckelaer, A., Lek, K., & Zondervan-Zwijnenburg, M. (2015). Editorial: Measurement invariance. Frontiers in Psychology, 6, 1064. doi:10.3389/fpsyg.2015.01064
  7. Loehlin, J. (2004). Latent variable models: An introduction to factor, path, and structural equation analysis. Taylor & Francis.
  8. Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233–255.
  9. Widaman, K. F., & Thompson, J. S. (2003). On specifying the null model for incremental fit indices in structural equation modeling. Psychological Methods, 8(1), 16–37.
  10. Kline, R. (2011). Principles and practice of structural equation modeling. Guilford Press.