In applied statistics (e.g., in the social sciences and psychometrics), common-method variance (CMV) is the spurious "variance that is attributable to the measurement method rather than to the constructs the measures are assumed to represent", or, equivalently, "systematic error variance shared among variables measured with and introduced as a function of the same method and/or source". For example, an electronic survey method might influence the responses of participants unfamiliar with an electronic survey interface differently than the responses of participants familiar with it. If measures are affected by CMV, or common-method bias, the intercorrelations among them can be inflated or deflated, depending on several factors. Although it is sometimes assumed that CMV affects all variables, evidence suggests that whether the correlation between two variables is affected by CMV depends on both the method and the particular constructs being measured.
Several ex ante remedies exist that help to avoid or minimize possible common-method variance. Important remedies have been compiled and discussed by Chang et al. (2010), Lindell & Whitney (2001), and Podsakoff et al. (2003).
Using simulated data sets, Richardson et al. (2009) investigated three ex post techniques for detecting common method variance: the correlational marker technique, the confirmatory factor analysis (CFA) marker technique, and the unmeasured latent method construct (ULMC) technique. They found that only the CFA marker technique provides some value; a comprehensive example of this technique is given by Williams et al. (2010). Kock (2015) discusses a full collinearity test that successfully identifies common method bias in a model that nevertheless passes standard convergent and discriminant validity assessment criteria based on a CFA.
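The full collinearity test can be sketched in code. The idea is to compute a variance inflation factor (VIF) for every latent variable by regressing it on all the others; Kock (2015) treats VIFs above 3.3 as a warning sign of common method bias. The sketch below is illustrative only: it assumes latent variable scores have already been extracted (e.g., from a PLS-SEM estimation), and the function name and synthetic data are this example's own, not from Kock's paper.

```python
# A minimal sketch of a full collinearity (VIF) test in the spirit of
# Kock (2015). Assumes `scores` is an (n_cases, n_latent_variables)
# matrix of latent variable scores already obtained elsewhere.
import numpy as np

def full_collinearity_vifs(scores: np.ndarray) -> np.ndarray:
    """Return one VIF per column of `scores`.

    For each column j, regress it (with an intercept) on all other
    columns and compute VIF_j = 1 / (1 - R^2_j). VIFs above 3.3 are
    commonly read as indicating pathological collinearity, possibly
    due to a shared method factor.
    """
    n, k = scores.shape
    vifs = np.empty(k)
    for j in range(k):
        y = scores[:, j]
        X = np.delete(scores, j, axis=1)
        X = np.column_stack([np.ones(n), X])  # add intercept column
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs[j] = 1.0 / (1.0 - r2)
    return vifs

rng = np.random.default_rng(0)
# Three independent synthetic "latent variable" score columns,
# i.e., no common method factor: VIFs should sit near 1.
scores = rng.standard_normal((200, 3))
print(full_collinearity_vifs(scores))
```

With a strong method factor added to every column, the same function would return VIFs well above the 3.3 threshold, which is the signal the test looks for.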
- Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. (October 2003). "Common method biases in behavioral research: A critical review of the literature and recommended remedies" (PDF). Journal of Applied Psychology 88 (5): 879–903. doi:10.1037/0021-9010.88.5.879. PMID 14516251.
- Richardson, H.A.; Simmering, M.J.; Sturman, M.C. (October 2009). "A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance". Organizational Research Methods 12 (4): 762–800. doi:10.1177/1094428109332834.
- Williams, L. J.; Brown, B. K. (1994). "Method variance in organizational behavior and human resources research: Effects on correlations, path coefficients, and hypothesis testing". Organizational Behavior and Human Decision Processes 57 (2): 185–209. doi:10.1006/obhd.1994.1011.
- Spector, P. E. (2006). "Method Variance in Organizational Research: Truth or Urban Legend?". Organizational Research Methods 9 (2): 221–232. doi:10.1177/1094428105284955.
- Chang, S.-J.; van Witteloostuijn, A.; Eden, L. (2010). "Common method variance in international business research". Journal of International Business Studies 41: 178–184. doi:10.1057/jibs.2009.88.
- Lindell, M. K.; Whitney, D. J. (2001). "Accounting for common method variance in cross-sectional research designs". Journal of Applied Psychology 86 (1): 114.
- Williams, L.J.; Hartman, N.; Cavazotte, F. (July 2010). "Method variance and marker variables: A review and comprehensive CFA marker technique". Organizational Research Methods 13 (3): 477–514. doi:10.1177/1094428110366036.
- Kock, N. (2015). "Common method bias in PLS-SEM: A full collinearity assessment approach". International Journal of e-Collaboration 11 (4): 1–10.
- Kock, N.; Lynn, G. S. (2012). "Lateral collinearity and misleading results in variance-based SEM: An illustration and recommendations" (PDF). Journal of the Association for Information Systems 13 (7): 546–580.