Estimation statistics

Not to be confused with Estimation theory.
For other uses, see Estimation (disambiguation).

Estimation statistics is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning and meta-analysis to plan experiments, analyze data and interpret results.[1] It is distinct from null hypothesis significance testing (NHST), which is considered less informative.[2][3] Estimation statistics, or simply estimation, is also known as the new statistics,[3] a term introduced in psychology, medical research, the life sciences and a wide range of other experimental sciences where NHST remains prevalent,[4] despite estimation having been recommended as preferable for several decades.[5][6]

The primary aim of estimation methods is to determine the size of an effect and report it along with its confidence interval, which conveys the precision of the estimate.[7] At its core, estimation involves analyzing data to obtain a point estimate (an effect size calculated from the data, used as an estimate of the population effect size) and an interval estimate that summarizes a range of likely values of the underlying population effect. Proponents of estimation see reporting a p-value as either a secondary task or an unhelpful distraction from the important business of reporting an effect size with its confidence interval.[8]
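
As a minimal illustration of this workflow, the sketch below (in Python, using NumPy and SciPy; the two groups are simulated and every number is a hypothetical choice, not a value from the cited sources) computes a point estimate of a mean difference together with its 95% interval estimate:

    # Point estimate (mean difference) and 95% confidence interval for two
    # independent groups. The data are simulated purely for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical control group
    treated = rng.normal(loc=11.5, scale=2.0, size=30)  # hypothetical treated group

    diff = treated.mean() - control.mean()  # point estimate of the effect

    # Standard error of the difference, using the pooled variance
    n1, n2 = len(treated), len(control)
    pooled_var = ((n1 - 1) * treated.var(ddof=1)
                  + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))

    # Interval estimate from the t distribution
    t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
    low, high = diff - t_crit * se, diff + t_crit * se
    print(f"mean difference = {diff:.2f}, 95% CI [{low:.2f}, {high:.2f}]")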

History

Physics has long employed a weighted-averaging method similar to meta-analysis.[9]

Estimation statistics in the modern era began with the development of the standardized effect size by Jacob Cohen in the 1960s. Research synthesis using estimation statistics was pioneered by Gene V. Glass, who developed the method of meta-analysis in the 1970s.[10] Estimation methods have since been refined by Larry Hedges, Michael Borenstein, Doug Altman, Martin Gardner, Geoff Cumming and others. The systematic review, in conjunction with meta-analysis, is a related technique with widespread use in medical research, and there are now over 60,000 citations to "meta-analysis" in PubMed. Despite this widespread adoption of meta-analysis, the estimation framework is still not routinely used in primary biomedical research.[4]

In the 1990s, editor Kenneth Rothman banned the use of p-values in the journal Epidemiology; compliance among authors was high, but the ban did not substantially change their analytical thinking.[11]

More recently, estimation methods are being adopted in fields such as neuroscience[12] and psychology.[13]

The Publication Manual of the American Psychological Association recommends estimation over hypothesis testing.[14] The Uniform Requirements for Manuscripts Submitted to Biomedical Journals document makes a similar recommendation: "Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size."[15]

Flaws in significance testing

In significance testing, the primary objective of statistical calculations is to obtain a p-value: the probability of observing the obtained result, or a more extreme one, assuming the null hypothesis is true. If the p-value is low (usually < 0.05), the practitioner is then encouraged to reject the null hypothesis. Proponents of estimation reject the validity of significance testing[3][7] for the following reasons, among others:

  • P-values are easily and commonly misinterpreted. For example, the p-value is often mistakenly thought of as 'the probability that the null hypothesis is false.'
  • The null hypothesis is always wrong for every set of observations: there is always some effect, even if it is minuscule.[16]
  • Significance testing produces arbitrarily dichotomous yes-no answers, while discarding important information about magnitude.[17]
  • Any particular p-value arises through the interaction of the effect size, the sample size (all else being equal, a larger sample size produces a smaller p-value) and sampling error.[18]
  • At low power, simulation reveals that sampling error makes p-values extremely volatile (see the sketch after this list).[19]
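
The volatility of p-values under low power can be demonstrated directly. The following simulation sketch (in Python with NumPy and SciPy; the true effect, group size and number of replications are illustrative assumptions, not values taken from the cited sources) repeats the same underpowered two-group experiment and prints the resulting p-values:

    # "Dance of the p values": the same underpowered experiment, repeated,
    # yields wildly varying p-values even though the true effect is fixed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    true_effect, n = 0.5, 20  # assumed effect of 0.5 SD; n = 20 per group (low power)

    p_values = []
    for _ in range(25):
        group_a = rng.normal(0.0, 1.0, n)
        group_b = rng.normal(true_effect, 1.0, n)
        p_values.append(stats.ttest_ind(group_a, group_b).pvalue)

    # Typically spans from well below .001 to well above .5
    print(", ".join(f"{p:.3f}" for p in sorted(p_values)))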

John Tukey lampooned significance testing by imagining what it would be like if physicists had characterized elastic materials with the statement: "when you pull on it, it gets longer."[16]

Benefits of estimation statistics

Advantages of confidence intervals

Confidence intervals behave in a predictable way: by definition, a 95% confidence interval has a 95% chance of capturing the underlying population mean (μ). This property holds at any sample size; what changes with increasing sample size is that the interval becomes narrower (more precise). In addition, 95% confidence intervals are also 83% prediction intervals: one experiment's confidence interval has an 83% chance of capturing any future experiment's mean.[3] As such, knowing a single experiment's 95% confidence interval gives the analyst a plausible range for the population mean, and for the outcomes of any subsequent replication experiments.
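
Both capture rates can be checked by simulation. A sketch (in Python with NumPy and SciPy; the population parameters and sample size are arbitrary illustrations) that estimates how often a 95% confidence interval captures the population mean, and how often it captures the mean of an independent replication:

    # Capture rates of 95% confidence intervals, estimated by simulation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    mu, sigma, n, reps = 50.0, 10.0, 25, 20_000
    t_crit = stats.t.ppf(0.975, df=n - 1)

    captures_mu = captures_replicate = 0
    for _ in range(reps):
        sample = rng.normal(mu, sigma, n)
        half_width = t_crit * sample.std(ddof=1) / np.sqrt(n)
        if abs(sample.mean() - mu) <= half_width:
            captures_mu += 1
        # Mean of an independent replication of the same experiment
        if abs(sample.mean() - rng.normal(mu, sigma, n).mean()) <= half_width:
            captures_replicate += 1

    print(f"captures population mean: {captures_mu / reps:.3f}")        # close to 0.95
    print(f"captures replicate mean:  {captures_replicate / reps:.3f}")  # close to 0.83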

Evidence-based statistics

Psychological studies of how statistics are perceived reveal that reporting interval estimates gives readers a more accurate perception of the data than reporting p-values.[20]

Precision planning

The precision of an estimate is formally defined as the reciprocal of its variance (1/variance) and, like power, increases (improves) with increasing sample size. Like power, high precision is expensive; research grant applications would ideally include precision/cost analyses. Proponents of estimation believe precision planning should replace power analysis, since statistical power itself is conceptually linked to significance testing.[3]
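
As a concrete illustration, planning for precision can mean choosing the sample size that makes the expected confidence interval acceptably narrow. A sketch (in Python with SciPy; the planning value for σ and the target interval half-width are hypothetical assumptions):

    # Precision planning: choose n so the 95% CI for a mean has a target
    # half-width w, given an assumed population SD (e.g. from pilot data).
    import math
    from scipy import stats

    sigma = 8.0     # assumed population SD (hypothetical planning value)
    target_w = 2.0  # desired half-width of the 95% confidence interval
    z = stats.norm.ppf(0.975)

    n = math.ceil((z * sigma / target_w) ** 2)
    precision = n / sigma**2  # 1 / Var(sample mean) = n / sigma^2
    print(f"required n = {n}; precision of the mean = {precision:.3f}")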

References

  1. ^ Ellis, Paul. "Effect size FAQ". 
  2. ^ Cohen, Jacob (1994). "The earth is round (p < .05)". American Psychologist 49: 997–1003. doi:10.1037/0003-066X.49.12.997. 
  3. ^ a b c d e Cumming, Geoff (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge. 
  4. ^ a b Button, Katherine; John P. A. Ioannidis, Claire Mokrysz, Brian A. Nosek, Jonathan Flint, Emma S. J. Robinson & Marcus R. Munafò (2013). "Power failure: why small sample size undermines the reliability of neuroscience". Nature Reviews Neuroscience 14: 365. doi:10.1038/nrn3475. 
  5. ^ Altman, Douglas (1991). Practical Statistics For Medical Research. London: Chapman and Hall. 
  6. ^ Douglas Altman, ed. (2000). Statistics with Confidence. London: Wiley-Blackwell. 
  7. ^ a b Cohen, Jacob (1990). "Things I Have Learned (So Far)". American Psychologist 45 (12): 1304. doi:10.1037/0003-066x.45.12.1304. 
  8. ^ Ellis, Paul. "Why can’t I just judge my result by looking at the p value?". Retrieved 5 June 2013. 
  9. ^ Hedges, Larry (1987). "How hard is hard science, how soft is soft science". American Psychologist 42: 443. doi:10.1037/0003-066x.42.5.443. 
  10. ^ Hunt, Morton (1997). How science takes stock: the story of meta-analysis. New York: The Russell Sage Foundation. ISBN 0-87154-398-2. 
  11. ^ Fidler, Fiona. "Editors Can Lead Researchers to Confidence Intervals, but Can't Make Them Think". 
  12. ^ Hentschke, Harald; Maik C. Stüttgen (December 2011). "Computation of measures of effect size for neuroscience data sets". European Journal of Neuroscience 34 (12): 1887–1894. doi:10.1111/j.1460-9568.2011.07902.x. 
  13. ^ Cumming, Geoff. "ESCI (Exploratory Software for Confidence Intervals)". 
  14. ^ "Publication Manual of the American Psychological Association, Sixth Edition". Retrieved 17 May 2013. 
  15. ^ "Uniform Requirements for Manuscripts Submitted to Biomedical Journals". Retrieved 17 May 2013. 
  16. ^ a b Cohen, Jacob (1994). "The earth is round (p < .05).". American Psychologist 49: 997–1003. doi:10.1037/0003-066X.49.12.997. 
  17. ^ Ellis, Paul (2010). The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge: Cambridge University Press. 
  18. ^ Denton E. Morrison, Ramon E. Henkel, ed. (2006). The Significance Test Controversy: A Reader. Aldine Transaction. ISBN 978-0202308791. 
  19. ^ Cumming, Geoff. "Dance of the p values". 
  20. ^ Beyth-Marom, R; Fidler, F.; Cumming, G. (2008). "Statistical cognition: Towards evidence-based practice in statistics and statistics education". Statistics Education Research Journal 7: 20–39.