Discriminant validity
In psychology, discriminant validity tests whether concepts or measurements that are not supposed to be related are actually unrelated.
Campbell and Fiske (1959) introduced the concept of discriminant validity within their discussion on evaluating test validity. They stressed the importance of using both discriminant and convergent validation techniques when assessing new tests. A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts.
In showing that two scales do not correlate, it is necessary to correct for attenuation in the correlation due to measurement error. The extent to which the two scales overlap can be calculated using the following formula, where $r_{xy}$ is the correlation between x and y, $r_{xx}$ is the reliability of x, and $r_{yy}$ is the reliability of y:

$$r_{x'y'} = \frac{r_{xy}}{\sqrt{r_{xx} \cdot r_{yy}}}$$
Although there is no standard value for discriminant validity, a result less than 0.85 suggests that discriminant validity likely exists between the two scales. A result greater than 0.85, however, suggests that the two constructs overlap greatly and are likely measuring the same thing, and therefore discriminant validity between them cannot be claimed.[1]
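As a minimal illustration, the correction can be expressed in a few lines of Python; the function and variable names below are illustrative rather than part of any standard library:

```python
import math

def disattenuated_correlation(r_xy, r_xx, r_yy):
    """Correlation between x and y corrected for attenuation due to
    measurement error: r_xy / sqrt(r_xx * r_yy)."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Illustrative values: an observed correlation of 0.40 between scales
# with reliabilities of 0.80 and 0.90.
r = disattenuated_correlation(0.40, 0.80, 0.90)
print(round(r, 3))  # 0.471 -> below 0.85, consistent with discriminant validity
```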
Consider researchers developing a new scale designed to measure narcissism. They may want to show discriminant validity with a scale measuring self-esteem. Narcissism and self-esteem are theoretically different concepts, and therefore it is important that the researchers show that their new scale measures narcissism and not simply self-esteem.
First, the average inter-item correlations within and between the two scales can be calculated:
- Narcissism — Narcissism: 0.47
- Narcissism — Self-esteem: 0.30
- Self-esteem — Self-esteem: 0.52
The correction for attenuation formula can then be applied, using the within-scale correlations as reliability estimates:

$$r_{x'y'} = \frac{0.30}{\sqrt{0.47 \times 0.52}} \approx 0.607$$
Since 0.607 is less than 0.85, it can be concluded that discriminant validity exists between the scale measuring narcissism and the scale measuring self-esteem. The two scales measure theoretically different constructs.
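The same arithmetic can be verified in a short, self-contained Python sketch (variable names are illustrative):

```python
import math

# Average inter-item correlations from the example above; the
# within-scale values serve as reliability estimates.
r_xy = 0.30  # narcissism - self-esteem
r_xx = 0.47  # narcissism - narcissism
r_yy = 0.52  # self-esteem - self-esteem

corrected = r_xy / math.sqrt(r_xx * r_yy)
print(round(corrected, 3))  # 0.607
print(corrected < 0.85)     # True: discriminant validity supported
```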
Recommended approaches to test for discriminant validity at the construct level are AVE-SE comparisons (Fornell & Larcker, 1981; note that the measurement error-adjusted inter-construct correlations derived from the CFA model should be used rather than raw correlations derived from the data)[2] and assessment of the HTMT ratio (Henseler et al., 2014).[3] Simulation tests reveal that the former performs poorly for variance-based structural equation models (SEM), e.g. PLS, but well for covariance-based SEM, e.g. Amos, while the latter performs well for both types of SEM.[3][4] Voorhees et al. (2015) recommend combining both methods for covariance-based SEM, with an HTMT cutoff of 0.85.[4] A recommended approach to test for discriminant validity at the item level is exploratory factor analysis (EFA).
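For illustration, the HTMT ratio can be computed from an item correlation matrix as the mean heterotrait (between-construct) item correlation divided by the geometric mean of the two average monotrait (within-construct) item correlations. The sketch below assumes a precomputed correlation matrix and uses absolute correlations; the function name and arguments are illustrative rather than taken from any published implementation:

```python
import numpy as np

def htmt(R, items_a, items_b):
    """Heterotrait-monotrait (HTMT) ratio for two constructs whose
    items index into the item correlation matrix R (cf. Henseler et al., 2014)."""
    R = np.asarray(R)
    # Mean correlation between items of different constructs
    hetero = np.mean([abs(R[i, j]) for i in items_a for j in items_b])
    # Mean correlations among items of the same construct
    mono_a = np.mean([abs(R[i, j]) for k, i in enumerate(items_a)
                      for j in items_a[k + 1:]])
    mono_b = np.mean([abs(R[i, j]) for k, i in enumerate(items_b)
                      for j in items_b[k + 1:]])
    return hetero / np.sqrt(mono_a * mono_b)

# Toy example: items 0-1 measure construct A, items 2-3 construct B.
R = np.array([[1.00, 0.60, 0.30, 0.25],
              [0.60, 1.00, 0.28, 0.32],
              [0.30, 0.28, 1.00, 0.55],
              [0.25, 0.32, 0.55, 1.00]])
print(round(htmt(R, [0, 1], [2, 3]), 3))  # 0.5, well below the 0.85 cutoff
```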
See also
- Average variance extracted (AVE)
- Concurrent validity
- Construct validity
- Convergent validity
- Multitrait-multimethod matrix
- Validity (statistics)
References
- ^ Hodson, G. (2021). Construct jangle or construct mangle? Thinking straight about (nonredundant) psychological constructs. Journal of Theoretical Social Psychology. Advance online publication. https://doi.org/10.1002/jts5.120
- ^ Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
- ^ a b Henseler, J., Ringle, C. M., & Sarstedt, M. (2014). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135.
- ^ a b Voorhees, C. M., Brady, M. K., Calantone, R., & Ramirez, E. (2015). Discriminant validity testing in marketing: An analysis, causes for concern, and proposed remedies. Journal of the Academy of Marketing Science, 1–16.
- Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.
- John, O. P., & Benet-Martinez, V. (2000). Measurement: Reliability, construct validation, and scale construction. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology (pp. 339–369). New York: Cambridge University Press.