Structural equation modeling
Structural equation modeling (SEM) is a statistical technique for testing and estimating causal relations using a combination of statistical data and qualitative causal assumptions. This definition of SEM was articulated by the geneticist Sewall Wright (1921), the economist Trygve Haavelmo (1943) and the cognitive scientist Herbert A. Simon (1953), and formally defined by Judea Pearl (2000) using a calculus of counterfactuals.
Structural equation models (SEM) allow both confirmatory and exploratory modeling, meaning they are suited to both theory testing and theory development. Confirmatory modeling usually starts out with a hypothesis that gets represented in a causal model. The concepts used in the model must then be operationalized to allow testing of the relationships between the concepts in the model. The model is tested against the obtained measurement data to determine how well the model fits the data. The causal assumptions embedded in the model often have falsifiable implications which can be tested against the data.
Given an initial theory, SEM can be used inductively by specifying a corresponding model and using data to estimate the values of its free parameters. Often the initial hypothesis requires adjustment in light of model evidence. SEM can also be used purely for exploration, usually in a manner similar to exploratory factor analysis, a technique common in psychometrics.
Among the strengths of SEM is the ability to construct latent variables: variables that are not measured directly, but are estimated in the model from several measured variables, each of which is predicted to 'tap into' the latent variables. This allows the modeler to explicitly capture the unreliability of measurement in the model, which in theory allows the structural relations between latent variables to be accurately estimated. Factor analysis, path analysis and regression all represent special cases of SEM.
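As a sketch of this idea, the following simulation (with made-up loadings) generates a latent variable and three indicators that each 'tap into' it; under the model, the off-diagonal covariances of the indicators are products of the loadings, which is what lets SEM separate structural relations from measurement error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
loadings = np.array([0.8, 0.7, 0.6])        # made-up loadings

F = rng.normal(size=n)                      # latent variable, variance 1
noise = rng.normal(size=(n, 3)) * np.sqrt(1.0 - loadings**2)
Y = F[:, None] * loadings + noise           # three noisy indicators of F

S = np.cov(Y, rowvar=False)
# Under the model, cov(y_i, y_j) = loading_i * loading_j for i != j.
print(S[0, 1], loadings[0] * loadings[1])   # both close to 0.56
```

Because the latent variable itself is never observed, only these implied covariances among the indicators carry information about it.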
In SEM, the qualitative causal assumptions are represented by the missing variables in each equation, as well as vanishing covariances among some error terms. These assumptions are testable in experimental studies and must be confirmed judgmentally in observational studies.
Equivalent models
In SEM, many models are equivalent in the sense that they imply the same mean vector and covariance matrix. A "cleaned" representation would model the mean and covariance matrix directly: a "Clean Normal Model" (CNM) is a model with a function for every entry of the covariance matrix and the mean vector. In terms of path diagrams, CNMs are the subset of SEMs that contain only squares (observed variables) connected by double-headed edges. CNMs are not popular for at least two reasons:
- CNMs are very difficult for human readers to interpret. People typically prefer to think of covariances in terms of common sources or causation; this aids model-building, even though it carries the danger of over-interpreting regressions as causation. Note that this re-representation has had notable successes: the IQ index, for example, although by definition not a directly observable entity, is arguably among the most predictively successful constructs in psychology.
- Unlike a CNM, an SEM lets us integrate variables that we propose exist but have not (yet) been measured. We could, for example, build a model in which ion flow in certain brain cells is a latent variable, and thereby predict the covariance of that flow with observable variables; if someone invents a measurement instrument that allows the flow in the specific region to be measured, the prediction can be falsified or confirmed.
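The equivalence point can be made concrete with a minimal bivariate example (the coefficient value is made up): a model in which x causes y and a model in which y causes x imply the identical covariance matrix, so no covariance-based fit measure can tell them apart:

```python
import numpy as np

b = 0.5  # standardized path coefficient (made-up value)

# Model A: x -> y, i.e. y = b*x + e, with var(x) = 1, var(e) = 1 - b**2
var_x, var_e = 1.0, 1.0 - b**2
sigma_a = np.array([[var_x,     b * var_x],
                    [b * var_x, b**2 * var_x + var_e]])

# Model B: y -> x, i.e. x = b*y + e', with var(y) = 1, var(e') = 1 - b**2
var_y, var_e2 = 1.0, 1.0 - b**2
sigma_b = np.array([[b**2 * var_y + var_e2, b * var_y],
                    [b * var_y,             var_y]])

print(np.allclose(sigma_a, sigma_b))  # True: the two models are equivalent
```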
Steps in performing SEM analysis
When SEM is used as a confirmatory technique, the model must be specified correctly based on the type of analysis that the researcher is attempting to confirm. When building the correct model, the researcher uses two different kinds of variables, namely exogenous and endogenous variables. The distinction between these two types of variables is whether the variable regresses on another variable or not. As in regression, the dependent variable (DV) regresses on the independent variable (IV), meaning that the DV is being predicted by the IV. In SEM terminology, other variables regress on exogenous variables, but exogenous variables never regress on other variables. In a directed graph of the model, an exogenous variable is recognizable as any variable from which arrows only emanate, where the emanating arrows denote which variables that exogenous variable predicts. Any variable that regresses on another variable is defined to be an endogenous variable, even if other variables regress on it. In a directed graph, an endogenous variable is recognizable as any variable receiving an arrow.
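As an illustration, this classification can be read mechanically off a directed graph: exogenous variables receive no arrows, endogenous variables receive at least one. The sketch below uses a hypothetical three-path model:

```python
# Hypothetical path diagram: each pair means "cause -> effect".
edges = [("x1", "y1"), ("x2", "y1"), ("y1", "y2")]

variables = {v for edge in edges for v in edge}
receives_arrow = {effect for _, effect in edges}

exogenous = sorted(variables - receives_arrow)   # arrows only emanate
endogenous = sorted(receives_arrow)              # at least one incoming arrow

print(exogenous)   # ['x1', 'x2']
print(endogenous)  # ['y1', 'y2']
```

Note that y1 is endogenous even though it also predicts y2.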
It is important to note that SEM is more general than regression. In particular, a variable can act as both independent and dependent variable.
Two main components of models are distinguished in SEM: the structural model, showing potential causal dependencies between endogenous and exogenous variables, and the measurement model, showing the relations between latent variables and their indicators. Exploratory and confirmatory factor analysis models, for example, contain only the measurement part, while path diagrams can be viewed as SEMs that contain only the structural part.
In specifying pathways in a model, the modeler can posit two types of relationships: (1) free pathways, in which hypothesized causal (in fact counterfactual) relationships between variables are tested, and therefore are left 'free' to vary, and (2) relationships between variables that already have an estimated relationship, usually based on previous studies, which are 'fixed' in the model.
A modeler will often specify a set of theoretically plausible models in order to assess whether the model proposed is the best of the set of possible models. Not only must the modeler account for the theoretical reasons for building the model as it is, but the modeler must also take into account the number of data points relative to the number of parameters the model must estimate, so that the model is identified. A model is identified when a unique set of parameter values reproduces the model-implied moments; no different set of parameter values yields an equivalent formulation. With p observed variables, the data supply p(p+1)/2 distinct variances and covariances as data points (plus p means, if means are modeled). A parameter is a value of interest, such as a regression coefficient between an exogenous and an endogenous variable, or a factor loading (the regression coefficient between an indicator and its factor). If there are fewer data points than estimated parameters, the resulting model is unidentified, since there are too few reference points to account for all the variance in the model. One remedy is to constrain some paths to zero, removing them from the model.
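A common counting rule (a necessary, though not sufficient, condition for identification) compares the distinct variances and covariances supplied by p observed variables with the number of free parameters; a minimal sketch:

```python
# Counting rule: p observed variables supply p*(p+1)//2 distinct
# variances and covariances; a model estimating more free parameters
# than that cannot be identified.
def degrees_of_freedom(p_observed: int, free_parameters: int) -> int:
    data_points = p_observed * (p_observed + 1) // 2
    return data_points - free_parameters

# One factor, three indicators: 3 loadings + 3 error variances = 6 free
# parameters (factor variance fixed to 1), against 3*4/2 = 6 data points.
print(degrees_of_freedom(3, 6))  # 0 -> just identified
print(degrees_of_freedom(3, 7))  # -1 -> unidentified
```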
Estimation of free parameters
Parameter estimation is done by comparing the actual covariance matrices representing the relationships between variables and the estimated covariance matrices of the best fitting model. This is obtained through numerical maximization of a fit criterion as provided by maximum likelihood estimation, quasi-maximum likelihood estimation, weighted least squares or asymptotically distribution-free methods. This is often accomplished by using a specialized SEM analysis program, of which several exist.
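As an illustrative sketch, not a production estimator, the following fits a one-factor model Sigma(theta) = L L' + diag(psi) to a covariance matrix by numerically minimizing an unweighted least-squares discrepancy (the loadings used to build the synthetic "sample" matrix are made up):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic covariance matrix built from made-up loadings for the demo;
# unit variances imply error variances psi_i = 1 - loading_i**2.
true_loadings = np.array([0.8, 0.7, 0.6])
S = np.outer(true_loadings, true_loadings)
np.fill_diagonal(S, 1.0)

def discrepancy(theta):
    """Unweighted least-squares distance between Sigma(theta) and S."""
    lam, psi = theta[:3], theta[3:]
    sigma = np.outer(lam, lam) + np.diag(psi)
    return np.sum((sigma - S) ** 2)

theta0 = np.full(6, 0.5)
res = minimize(discrepancy, theta0, method="L-BFGS-B",
               bounds=[(0.0, None)] * 3 + [(1e-6, None)] * 3)

print(np.round(res.x[:3], 3))  # recovered loadings, close to [0.8 0.7 0.6]
```

Maximum likelihood estimation replaces this discrepancy function with the likelihood-based fit criterion but follows the same numerical-minimization pattern.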
Assessment of model and model fit
Having estimated a model, analysts will want to interpret the model. Estimated paths may be tabulated and/or presented graphically as a path model. The impact of variables is assessed using path tracing rules (see path analysis).
It is important to examine the "fit" of an estimated model to determine how well it models the data. This is a basic task in SEM: it forms the basis for accepting or rejecting models and, more usually, for accepting one competing model over another. The output of SEM programs includes matrices of the estimated relationships between variables in the model. Assessment of fit essentially calculates how similar the predicted data are to matrices containing the relationships in the actual data.
Formal statistical tests and fit indices have been developed for these purposes. Individual parameters of the model can also be examined within the estimated model in order to see how well the proposed model fits the driving theory. Most, though not all, estimation methods make such tests of the model possible.
Of course, as in all statistical hypothesis tests, SEM model tests are based on the assumption that the correct and complete relevant data have been modeled. In the SEM literature, discussion of fit has led to a variety of different recommendations on the precise application of the various fit indices and hypothesis tests.
There are differing approaches to assessing fit. Traditional approaches start from a null hypothesis and reward more parsimonious models (i.e. those with fewer free parameters); others, such as AIC, focus on how little the fitted values deviate from a saturated model (i.e. how well they reproduce the measured values), taking into account the number of free parameters used. Because different measures of fit capture different elements of the fit of the model, it is appropriate to report a selection of different fit measures.
Some of the more commonly used measures of fit include:
- Chi-squared: A fundamental measure of fit used in the calculation of many other fit measures. Conceptually it is a function of the sample size and the difference between the observed covariance matrix and the model-implied covariance matrix.
- Akaike information criterion (AIC): A comparative measure of model quality that penalizes the number of free parameters, useful for choosing among competing models.
- Root mean square error of approximation (RMSEA): Values below .05 are considered to indicate good fit; an RMSEA of .1 or more is often taken to indicate poor fit.
- Standardized root mean square residual (SRMR): A popular absolute fit indicator; a good model should have an SRMR smaller than .05.
- Comparative fit index (CFI): In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, the CFI will not be very high. A value of .90 or higher is desirable.
For each measure of fit, a decision as to what represents a good-enough fit between the model and the data must reflect other contextual factors such as sample size (for instance very large samples make the Chi-squared test overly sensitive), the ratio of indicators to factors, and the overall complexity of the model.
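For instance, one common formula for the RMSEA (variants exist in the literature) can be computed directly from the chi-squared statistic, its degrees of freedom, and the sample size:

```python
def rmsea(chi2: float, df: int, n: int) -> float:
    """One common RMSEA formula: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return (max(chi2 - df, 0.0) / (df * (n - 1))) ** 0.5

print(rmsea(20.0, 10, 101))  # 0.1 -> often taken to indicate poor fit
print(rmsea(8.0, 10, 101))   # 0.0 -> chi2 below df is treated as exact fit
```

The formula illustrates the sample-size sensitivity noted above: for fixed chi-squared and degrees of freedom, larger n drives the RMSEA toward zero.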
The model may need to be modified in order to improve the fit, thereby estimating the most likely relationships between variables. Many programs provide modification indices, which may guide minor modifications. A modification index reports the change in χ² that would result from freeing a fixed parameter: usually, therefore, from adding to the model a path that is currently set to zero. Modifications that improve model fit may be flagged as potential changes to the model. Modifications to a model, especially to the structural model, are changes to the theory claimed to be true, so they must either make sense in terms of the theory being tested or be acknowledged as limitations of that theory. Changes to the measurement model are effectively claims that the items/data are impure indicators of the latent variables specified by theory.
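A hedged sketch of how such an index is used: the drop in χ² from freeing one parameter is referred to a chi-squared distribution with 1 degree of freedom (the fit values below are made up):

```python
from scipy.stats import chi2

chi2_before = 112.4   # hypothetical chi-squared of the current model
chi2_after = 107.2    # hypothetical chi-squared after freeing one path
delta = chi2_before - chi2_after          # the modification index value

p_value = chi2.sf(delta, df=1)            # upper-tail probability
print(round(delta, 1), round(p_value, 4))
```

A small p-value suggests the freed path would significantly improve fit, but, as the caution below emphasizes, statistical improvement alone does not justify the change.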
Models should not be driven by modification indices alone, as MacCallum (1986) demonstrated: "even under favorable conditions, models arising from specification searches must be viewed with caution."
Sample size and power
Where the proposed SEM is the basis for a research hypothesis, ad hoc rules of thumb requiring 10 observations per indicator as a lower bound for sample-size adequacy have been widely used since their original articulation by Nunnally (1967). Being linear in the number of model constructs, these rules are easy to compute, but they have been found to yield sample sizes that are too small. One study found that sample sizes in a particular stream of SEM literature averaged only 50% of the minimum needed to draw the conclusions the studies claimed; overall, 80% of the research articles in the study drew conclusions from insufficient samples. The information demands of structural model estimation grow with the number of potential combinations of latent variables, while the information supplied for estimation grows with the number of measured parameters times the number of observations in the sample; both relationships are non-linear. Sample size in SEM can be computed through two methods: the first as a function of the ratio of indicator variables to latent variables, and the second as a function of minimum effect, power and significance. Software and methods for computing both have been developed by Westland (2010). The theory of power equivalence of von Oertzen (2010) formally describes equivalence classes of SEMs with equal power, allowing an analytic trade-off among design parameters of an SEM, such as sample size, indicator reliability, or number of indicators, while keeping power constant.
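The ad hoc lower bound described above is trivial to compute (its assumed form here is a fixed number of observations per indicator variable), which is part of its appeal, despite the resulting sample sizes being too small:

```python
def rule_of_thumb_n(n_indicators: int, per_indicator: int = 10) -> int:
    """Ad hoc lower bound: a fixed number of observations per indicator."""
    return per_indicator * n_indicators

print(rule_of_thumb_n(12))  # 120 observations for a 12-indicator model
```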
Interpretation and communication
The set of models is then interpreted so that claims about the constructs can be made, based on the best-fitting model.
Caution should always be taken when making claims of causality even when experimentation or time-ordered studies have been done. The term causal model must be understood to mean: "a model that conveys causal assumptions," not necessarily a model that produces validated causal conclusions. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses but even a randomized experiment cannot rule out all such threats to causal inference. Good fit by a model consistent with one causal hypothesis invariably entails equally good fit by another model consistent with an opposing causal hypothesis. No research design, no matter how clever, can help distinguish such rival hypotheses, save for interventional experiments.
As in any science, subsequent replication and perhaps modification will proceed from the initial finding.
Advanced uses
- Measurement invariance
- Multiple group modeling: a technique allowing joint estimation of multiple models, each for a different sub-group. Applications include behavior genetics and the analysis of differences between groups (e.g., gender, cultures, test forms written in different languages).
- Latent growth modeling
- Hierarchical/multilevel models; item response theory models
- Mixture model (latent class) SEM
- Alternative estimation and testing techniques
- Robust inference
- Survey sampling analyses
- Multi-method multi-trait models
- Structural Equation Model Trees
References
- Wright, Sewall (1921). "Correlation and causation". Journal of Agricultural Research 20: 557–585.
- Simon, Herbert (1953). "Causal ordering and identifiability". In Hood, W.C.; Koopmans, T.C. Studies in Econometric Method. New York: Wiley. pp. 49–74.
- Pearl, Judea (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press. ISBN 0-521-77362-8.
- Bollen, K A, and Long, S J (1993) Testing Structural Equation Models. SAGE Focus Edition, vol. 154, ISBN 0-8039-4507-8
- Loehlin, J. C. (2004). Latent Variable Models: An Introduction to Factor, Path, and Structural Equation Analysis. Psychology Press.
- MacCallum, R. (1986). "Specification searches in covariance structure modeling". Psychological Bulletin, 100, 107–120.
- Nunnally, J. C. (1967). "Psychometric Theory". McGraw-Hill, New York: 355.
- Westland, J. Christopher (2010). "Lower bounds on sample size in structural equation modeling". Electron. Comm. Res. Appl. 9 (6): 476–487. doi:10.1016/j.elerap.2010.07.003.
- von Oertzen, T. (2010). "Power equivalence in structural equation modelling". British Journal of Mathematical and Statistical Psychology 62 (2): 257–272.
Further reading
- Bagozzi, R.; Yi, Y. (2012). "Specification, evaluation, and interpretation of structural equation models". Journal of the Academy of Marketing Science, 40 (1), 8–34. doi:10.1007/s11747-011-0278-x
- Bartholomew, D J, and Knott, M (1999) Latent Variable Models and Factor Analysis Kendall's Library of Statistics, vol. 7. Arnold publishers, ISBN 0-340-69243-X
- Bentler, P.M. & Bonett, D.G. (1980). "Significance tests and goodness of fit in the analysis of covariance structures". Psychological Bulletin, 88, 588-606.
- Bollen, K A (1989). Structural Equations with Latent Variables. Wiley, ISBN 0-471-01171-1
- Byrne, B. M. (2001) Structural Equation Modeling with AMOS - Basic Concepts, Applications, and Programming.LEA, ISBN 0-8058-4104-0
- Goldberger, A. S. (1972). Structural equation models in the social sciences. Econometrica 40, 979- 1001.
- Haavelmo, T. (1943) "The statistical implications of a system of simultaneous equations," Econometrica 11:1–2. Reprinted in D.F. Hendry and M.S. Morgan (Eds.), The Foundations of Econometric Analysis, Cambridge University Press, 477—490, 1995.
- Hair, Joe F., G. Tomas M. Hult, Christian M. Ringle, and Marko Sarstedt. 2013. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). Thousand Oaks: Sage. http://www.sagepub.com/books/Book237345
- Hoyle, R H (ed) (1995) Structural Equation Modeling: Concepts, Issues, and Applications. SAGE, ISBN 0-8039-5318-6
- Kaplan, D (2000) Structural Equation Modeling: Foundations and Extensions. SAGE, Advanced Quantitative Techniques in the Social Sciences series, vol. 10, ISBN 0-7619-1407-2
- Kline, R. B. (2010) Principles and Practice of Structural Equation Modeling (3rd Edition). The Guilford Press, ISBN 978-1-60623-877-6
- Jöreskog, K.; F. Yang (1996). "Non-linear structural equation models: The Kenny-Judd model with interaction effects". In G. Marcoulides and R. Schumacker, (eds.), Advanced structural equation modeling: Concepts, issues, and applications. Thousand Oaks, CA: Sage Publications.
External links
- Ed Rigdon's Structural Equation Modeling Page: people, software and sites
- Structural equation modeling page under David Garson's StatNotes, NCSU
- Issues and Opinion on Structural Equation Modeling, SEM in IS Research
- The causal interpretation of structural equations (or SEM survival kit) by Judea Pearl 2000.
- Structural Equation Modeling Reference List by Jason Newsom: journal articles and book chapters on structural equation models
- PLS-SEM book: online resources and additional information
- Path Analysis in AFNI: The open source (GPL) AFNI package contains SEM code
- Handbook of Management Scales, a collection of previously used multi-item scales to measure constructs for SEM