Statistical conclusion validity


Statistical conclusion validity is the degree to which conclusions about the relationship among variables based on the data are correct or ‘reasonable’. The concept originally concerned solely whether the statistical conclusion about the relationship of the variables was correct, but there is now a movement toward ‘reasonable’ conclusions that draw on quantitative, statistical, and qualitative data.[1] Fundamentally, two types of errors can occur: type I (finding a difference or correlation when none exists) and type II (finding no difference or correlation when one exists). Statistical conclusion validity concerns the qualities of the study that make these types of errors more likely, and involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures.[2][3][4]

Common threats[edit]

The most common threats to statistical conclusion validity are:

Low statistical power[edit]

Power is the probability of correctly rejecting the null hypothesis when it is false (the complement of the type II error rate). Experiments with low power have a higher probability of incorrectly accepting the null hypothesis, that is, committing a type II error and concluding that there is no effect when one actually exists (i.e., there is real covariation between the cause and effect). Low power occurs when the sample size of the study is too small given other factors (small effect sizes, large group variability, unreliable measures, etc.).
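The dependence of power on effect size and sample size can be made concrete with a back-of-the-envelope calculation. The sketch below is not from the article; it uses a normal approximation to the power of a two-sided two-sample test, with illustrative numbers:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test.

    d: standardized effect size (difference in means / SD)
    n: sample size per group
    """
    z_alpha = 1.959963984540054           # critical value for alpha = 0.05, two-sided
    noncentrality = d * math.sqrt(n / 2)  # expected z-statistic under the alternative
    return 1.0 - normal_cdf(z_alpha - noncentrality)

# A "medium" effect (d = 0.5) with 20 participants per group is badly
# underpowered; the same effect with 100 per group is well powered.
print(round(approx_power(0.5, 20), 2))   # ~0.35: a type II error is more likely than not
print(round(approx_power(0.5, 100), 2))  # ~0.94
```

With low power, "accepting the null" says little: the study would likely have missed the effect even if it were real.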

Violated assumptions of the test statistics[edit]

Most statistical tests (particularly inferential statistics) involve assumptions about the data that make the analysis suitable for testing a hypothesis. Violating the assumptions of statistical tests can lead to incorrect inferences about the cause-effect relationship. The robustness of a test indicates how sensitive it is to violations. Violations of assumptions may make tests more or less likely to make type I or II errors.

Fishing and the error rate problem[edit]

Each hypothesis test carries a set risk of a type I error (the alpha rate). If a researcher searches, or "fishes", through the data, testing many different hypotheses to find a significant effect, the type I error rate is inflated. The more the researcher repeatedly tests the data, the higher the chance of observing a type I error and making an incorrect inference about the existence of a relationship.
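How quickly fishing inflates the error rate follows directly from the alpha rate. A minimal sketch, assuming independent tests each conducted at alpha = .05:

```python
def familywise_error_rate(alpha, k):
    """Probability of at least one type I error across k independent
    tests, each conducted at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** k

# One test at alpha = .05 risks a 5% false positive; fishing through
# 20 independent tests pushes the risk of at least one toward two-thirds.
print(round(familywise_error_rate(0.05, 1), 2))   # 0.05
print(round(familywise_error_rate(0.05, 20), 2))  # 0.64

# A Bonferroni correction (testing each at alpha / k) restores control,
# keeping the familywise rate just under the nominal 0.05.
print(round(familywise_error_rate(0.05 / 20, 20), 3))
```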

Unreliability of measures[edit]

If the dependent and/or independent variable(s) are not measured reliably (i.e., with large amounts of measurement error), incorrect conclusions can be drawn.
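Classical test theory quantifies this: measurement error attenuates observed correlations. The sketch below applies Spearman's attenuation formula with made-up illustrative reliabilities, not values from the article:

```python
import math

def observed_correlation(true_r, reliability_x, reliability_y):
    """Spearman's attenuation formula: the correlation observed between
    two unreliable measures equals the true correlation shrunk by the
    square root of the product of the measures' reliabilities."""
    return true_r * math.sqrt(reliability_x * reliability_y)

# A true correlation of .50 measured with two tests of reliability .70
# shows up as only .35, making a type II error considerably more likely.
print(round(observed_correlation(0.5, 0.7, 0.7), 2))  # 0.35
```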

Restriction of range[edit]

Restriction of range, such as floor and ceiling effects or selection effects, reduces the power of the experiment and increases the chance of a type II error.[5] This is because correlations are attenuated (weakened) by reduced variability (see, for example, the equation for the Pearson product-moment correlation coefficient, which uses score variance in its estimation).
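The attenuation can be illustrated with Thorndike's Case II formula for direct range restriction, a standard result in this literature; the numbers below are illustrative, not from the article:

```python
import math

def range_restricted_r(r, u):
    """Thorndike Case II: correlation after direct restriction of range.

    r: correlation in the unrestricted population
    u: ratio of restricted SD to unrestricted SD on the selection variable
    """
    return r * u / math.sqrt(1.0 - r**2 + (r**2) * (u**2))

# Halving the standard deviation of the predictor cuts an r of .50 to
# roughly .28, so a sample of the same size is far more likely to miss it.
print(round(range_restricted_r(0.5, 0.5), 2))  # ~0.28
```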

Heterogeneity of the units under study[edit]

Greater heterogeneity of the individuals participating in the study can also impact interpretations of results by increasing the variance of results (see also sampling error): the more heterogeneous the units, the higher the standard deviation of scores will be. This obscures possible interactions between the characteristics of the units and the cause-effect relationship.
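The variance inflation from pooling heterogeneous units follows from the law of total variance: total variance is the average within-group variance plus the variance of the group means. A small sketch with made-up subgroup values:

```python
def pooled_variance(means, variances, weights):
    """Law of total variance for a mixture of subgroups:
    total = weighted average of within-group variances
          + weighted variance of the subgroup means."""
    grand_mean = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - grand_mean) ** 2 for w, m in zip(weights, means))
    return within + between

# Two equally sized subgroups, each with variance 1 but with means 2 apart:
# pooling them doubles the variance relative to a homogeneous sample.
print(pooled_variance([0.0, 2.0], [1.0, 1.0], [0.5, 0.5]))  # 2.0
```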

Threats to internal validity[edit]

Any effect that can impact the internal validity of a research study may bias the results and impact the validity of statistical conclusions reached. These threats to internal validity include unreliability of treatment implementation (lack of standardization) or failing to control for extraneous variables.

References[edit]

  1. ^ Cozby, Paul C. (2009). Methods in behavioral research (10th ed.). Boston: McGraw-Hill Higher Education. 
  2. ^ Cohen, R. J.; Swerdlik, M. E. (2004). Psychological testing and assessment (6th ed.). Sydney: McGraw-Hill. 
  3. ^ Cook, T. D.; Campbell, D. T.; Day, A. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin. 
  4. ^ Shadish, W.; Cook, T. D.; Campbell, D. T. (2006). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin. 
  5. ^ Sackett, C. M.; Lievens, F.; Berry; Landers, R. N. (2007). "A Cautionary Note on the Effects of Range Restriction on Predictor Intercorrelations". Journal of Applied Psychology 92 (2): 538–544. doi:10.1037/0021-9010.92.2.538.