In statistical practice, estimation is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning and meta-analysis to plan experiments, analyze data and interpret results. Estimation statistics is distinct from estimation theory in both mathematical focus and scientific use: estimation theory is widely used in signal processing, while estimation statistics is used in psychology, medical research, the life sciences and a wide range of other experimental sciences.
Within frequentist methodology, estimation is distinct from null hypothesis significance testing, and the two approaches are considered by some statisticians to be in competition. Proponents argue that estimation is far more informative.
The primary aim of estimation methods is to estimate the size of an effect and report that effect size along with its confidence interval, the latter of which conveys the precision of the estimate. At its core, estimation involves analyzing data to obtain a point estimate (an effect size calculated from the data, used as an estimate of the population effect size) and an interval estimate that summarizes a range of likely values of the underlying population effect. Proponents of estimation see reporting a p-value as either a secondary task or an unhelpful distraction from the important business of reporting an effect size with its confidence interval.
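As a minimal sketch of this workflow, the snippet below computes a point estimate (a difference of group means) and an interval estimate for it. The data and group names are hypothetical, and a simple normal-approximation interval is used rather than any particular textbook's method:

```python
import math
import statistics

def mean_diff_ci(group_a, group_b, z=1.96):
    """Point estimate (difference of group means) with an approximate 95% CI.

    Uses a normal approximation: CI = estimate +/- z * standard error.
    """
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = math.sqrt(statistics.variance(group_a) / len(group_a)
                   + statistics.variance(group_b) / len(group_b))
    return diff, (diff - z * se, diff + z * se)

# hypothetical measurements from two groups
treatment = [5.1, 4.9, 6.0, 5.5, 5.2, 5.8]
control = [4.2, 4.8, 4.5, 4.1, 4.9, 4.4]
est, (lo, hi) = mean_diff_ci(treatment, control)
print(f"point estimate: {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The interval, not the point estimate alone, is what carries the information about precision that estimation proponents emphasize.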
Despite estimation statistics having been recommended as preferred over significance testing by biomedical statistics textbooks for several decades, much of biomedical research still relies on hypothesis testing.
Estimation statistics in the modern era began with the development of the standardized effect size by Jacob Cohen in the 1960s. Research synthesis using estimation statistics was pioneered by Gene V. Glass with the development of meta-analysis in the 1970s. Estimation methods have since been refined by Larry Hedges, Michael Borenstein, Doug Altman, Martin Gardner, Geoff Cumming and others. The systematic review, in conjunction with meta-analysis, is a related technique with widespread use in medical research. There are now over 60,000 citations to "meta-analysis" in PubMed. Despite the widespread adoption of meta-analysis, the estimation framework is still not routinely used in primary biomedical research.
The Publication Manual of the American Psychological Association recommends estimation over hypothesis testing. The Uniform Requirements for Manuscripts Submitted to Biomedical Journals document makes a similar recommendation: "Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size."
Flaws in significance testing
In significance testing, the primary objective of statistical calculations is to obtain a p-value, the probability of seeing an obtained result, or a more extreme result, when assuming the null hypothesis is true. If the p-value is low (usually < 0.05), the statistical practitioner is then encouraged to reject the null hypothesis. Proponents of estimation reject the validity of significance testing for the following reasons, among others:
- The null hypothesis is always wrong; there is always some effect, even if it is minuscule.
- Significance testing produces arbitrarily dichotomous yes-no answers, while discarding important information about magnitude.
- Any particular p-value arises through the interaction of the effect size, the sample size (all things being equal a larger sample size produces a smaller p-value) and sampling error.
- At low power, simulation reveals that sampling error makes p-values extremely volatile.
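The volatility claim in the last point can be checked with a quick simulation. The sketch below assumes a hypothetical low-power scenario (a true effect of 0.5 standard deviations with n = 10 per group) and a normal-approximation two-sample test; repeated identical experiments yield wildly different p-values:

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

random.seed(1)
# hypothetical scenario: true effect of 0.5 SD, n = 10 per group -> low power
pvals = [two_sample_p([random.gauss(0.5, 1) for _ in range(10)],
                      [random.gauss(0.0, 1) for _ in range(10)])
         for _ in range(1000)]
print(f"p-values range from {min(pvals):.4f} to {max(pvals):.4f}")
print(f"fraction significant at .05: {sum(p < .05 for p in pvals) / 1000:.2f}")
```

Under these assumed parameters the same underlying effect produces p-values spanning nearly the whole unit interval, which is the instability the bullet points describe.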
Advantages of confidence intervals
Confidence intervals behave in a predictable way. By definition, a 95% confidence interval has a 95% chance of capturing the underlying population mean (μ). This property holds regardless of sample size; what changes as the sample grows is that the interval becomes narrower (more precise). In addition, a 95% confidence interval is also an 83% prediction interval: one experiment's confidence interval has an 83% chance of capturing any future experiment's mean. As such, knowing a single experiment's 95% confidence interval gives the analyst a plausible range for the population mean and plausible outcomes of any subsequent replication experiments.
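The stated capture rate can be verified by simulation. This sketch (hypothetical population parameters; a normal-approximation interval with z = 1.96 rather than a t-based one) repeatedly samples from a known population and counts how often the computed interval contains μ:

```python
import math
import random

random.seed(2)
mu, sigma, n, z = 10.0, 2.0, 30, 1.96  # assumed population and design
trials, covered = 2000, 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))  # sample SD
    half = z * s / math.sqrt(n)  # half-width of the interval
    if m - half <= mu <= m + half:
        covered += 1
print(f"coverage: {covered / trials:.3f}")  # near the nominal 0.95
```

The observed coverage sits close to (slightly below) 95%, the small shortfall being due to using z instead of the t critical value at n = 30.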
Evidence-based statistics
Psychological studies of the perception of statistics reveal that reporting interval estimates gives readers a more accurate perception of the data than reporting p-values does.
The precision of an estimate is formally defined as the reciprocal of its variance (1/variance) and, like statistical power, it increases (improves) with sample size. Like power, high precision is expensive; research grant applications would ideally include precision/cost analyses. Proponents of estimation believe precision planning should replace power analysis, since statistical power itself is conceptually tied to significance testing.
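A minimal precision-planning calculation might look like the following. It assumes the population standard deviation is known (say, from a pilot study) and targets a chosen confidence-interval half-width: since the 95% CI half-width for a mean is z·σ/√n, solving for n gives the required sample size. The numbers are illustrative, not from any cited study:

```python
import math

def n_for_halfwidth(sigma, halfwidth, z=1.96):
    """Smallest n whose 95% CI for a mean has at most the target half-width.

    Half-width = z * sigma / sqrt(n), so n = ceil((z * sigma / halfwidth)**2).
    Assumes sigma is known, e.g. estimated from a pilot study.
    """
    return math.ceil((z * sigma / halfwidth) ** 2)

# hypothetical: SD of 8 units, and we want the mean pinned down to +/- 2 units
print(n_for_halfwidth(8, 2))  # -> 62
```

Halving the target half-width quadruples the required sample size, which is where the precision/cost trade-off mentioned above comes from.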