Statistical significance

Statistical significance is the low probability, computed under the assumption that the null hypothesis is true, of obtaining a result at least as extreme as the one observed.[1][2][3][4][5][6][7] It is an integral part of statistical hypothesis testing, where it helps investigators decide whether a null hypothesis can be rejected.[8][9] In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect is due to sampling error alone.[10][11] But if the probability of obtaining a result at least as extreme as the one observed (for example, a large difference between two or more sample means), given that the null hypothesis is true, is less than a pre-determined threshold (e.g. a 5% chance), then an investigator can conclude that the observed effect actually reflects the characteristics of the population rather than just sampling error.[8]
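
The role of sampling error can be illustrated with a short simulation. The sketch below (Python with NumPy; the sample size, number of trials, and the 0.5 cutoff are arbitrary choices for illustration) draws many pairs of samples from the same population, so the null hypothesis is true by construction, and estimates how often a difference between sample means at least as extreme as a given value arises from sampling error alone.

    import numpy as np

    rng = np.random.default_rng(0)

    # The null hypothesis is true by construction: both samples are
    # drawn from the same normal population (mean 0, sd 1).
    n_trials, n = 100_000, 30
    a = rng.normal(0.0, 1.0, size=(n_trials, n))
    b = rng.normal(0.0, 1.0, size=(n_trials, n))
    diff = a.mean(axis=1) - b.mean(axis=1)

    # Fraction of trials whose mean difference is at least as extreme
    # as 0.5: the rate at which sampling error alone produces it.
    print((np.abs(diff) >= 0.5).mean())   # roughly 0.05 here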

The present-day concept of statistical significance originated with Ronald Fisher, who developed statistical hypothesis testing in the early 20th century.[2][12][13] These tests are used to determine whether the outcome of a study would lead to a rejection of the null hypothesis at a pre-specified low probability threshold. The probability computed from the data, called the p-value, can help an investigator decide whether a result contains sufficient information to cast doubt on the null hypothesis.[14]

P-values are often coupled to a significance or alpha (α) level, which is also set ahead of time, usually at 0.05 (5%).[14] Thus, if a p-value is found to be less than 0.05, the result is considered statistically significant and the null hypothesis is rejected.[15] Other significance levels, such as 0.1 or 0.01, are also used, depending on the field of study.

In statistics, statistical significance is not the same as research, theoretical, or practical significance.[8][9][16]

History

Main article: History of statistics

The concept of statistical significance was originated by Ronald Fisher when he developed statistical hypothesis testing, which he described as "tests of significance", in his 1925 publication, Statistical Methods for Research Workers.[2][12][13] Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.[17] In their 1933 paper, Jerzy Neyman and Egon Pearson recommended that the significance level (e.g. 0.05), which they called α, be set ahead of time, prior to any data collection.[17][18]

Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed, and in his 1956 publication Statistical Methods and Scientific Inference he recommended that significance levels be set according to specific circumstances.[17]

Role in statistical hypothesis testing

In a two-tailed test, the rejection region (the α level) is partitioned between both ends of the sampling distribution and makes up 5% of the area under the curve.

Statistical significance plays a pivotal role in statistical hypothesis testing, where it is used to determine if a null hypothesis should be rejected or retained. A null hypothesis is the general or default statement that nothing happened or changed.[19] For a null hypothesis to be rejected as false, the result has to be identified as being statistically significant, i.e. unlikely to have occurred by chance alone.

To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect at least as extreme as the one observed, given that the null hypothesis is true.[7] The null hypothesis is rejected if the p-value is less than the significance or α level. The α level is the probability of rejecting the null hypothesis given that it is true (a type I error) and is most often set at 0.05 (5%); that is, with α = 0.05 the conditional probability of a type I error, given that the null hypothesis is true, is 5%.[20] A statistically significant result is then one in which the observed p-value is less than the α level, which for α = 0.05 is formally written as p < 0.05.[20]
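
As a concrete sketch of this procedure (Python with SciPy; the measurements are invented for illustration), a two-sample t-test yields a p-value that is compared with a pre-set α:

    from scipy import stats

    # Hypothetical measurements from two independent groups.
    group_a = [5.1, 4.9, 5.6, 5.2, 4.8, 5.4, 5.0, 5.3]
    group_b = [4.5, 4.7, 4.4, 4.9, 4.6, 4.3, 4.8, 4.5]

    alpha = 0.05  # significance level, fixed before the analysis
    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    # Reject the null hypothesis only if the p-value falls below alpha.
    if p_value < alpha:
        print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
    else:
        print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")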

If an observed p-value is not lower than the significance level, then rather than simply accepting the null hypothesis, it would often be appropriate, where feasible, to increase the sample size of the study and see whether the significance level is reached.[21]
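
One way to judge how much larger such a study would need to be is a prospective power analysis, a related but distinct technique; the sketch below uses the statsmodels library, and the assumed effect size (Cohen's d = 0.5) and 80% power target are illustrative assumptions only.

    from statsmodels.stats.power import TTestIndPower

    # Solve for the per-group sample size needed to detect an assumed
    # medium effect (d = 0.5) at alpha = 0.05 with 80% power.
    n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                              alpha=0.05, power=0.8)
    print(round(n_per_group))  # roughly 64 per group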

If the α level is set at 0.05, it means that the rejection region comprises 5% of the sampling distribution.[22] This 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution. One-tailed tests are more powerful than two-tailed tests when the effect lies in the hypothesized direction, as the null hypothesis can then be rejected with a less extreme result.
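
The effect of the two allocations on the cutoff can be seen in the critical values of, for example, a standard normal sampling distribution (a Python sketch using SciPy):

    from scipy.stats import norm

    alpha = 0.05

    # One-tailed test: the entire 5% rejection region lies in one tail.
    z_one = norm.ppf(1 - alpha)      # about 1.645

    # Two-tailed test: 2.5% per tail, so a more extreme cutoff is needed.
    z_two = norm.ppf(1 - alpha / 2)  # about 1.960

    print(f"one-tailed critical value: {z_one:.3f}")
    print(f"two-tailed critical value: {z_two:.3f}")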

Defining significance in terms of sigma (σ)

In specific fields such as particle physics and manufacturing, statistical significance is often expressed in multiples of the standard deviation or sigma (σ) of a normal distribution, with significance thresholds set at a much stricter level (e.g. 5σ).[23][24] For instance, the certainty of the Higgs boson particle's existence was based on the 5σ criterion, which corresponds to a p-value of about 1 in 3.5 million.[24][25]
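
The correspondence between a sigma threshold and a p-value is the tail probability of the normal distribution beyond that many standard deviations; a short Python/SciPy sketch:

    from scipy.stats import norm

    # One-sided tail probability beyond 5 standard deviations.
    p_5sigma = norm.sf(5)    # survival function, 1 - CDF
    print(p_5sigma)          # about 2.87e-07
    print(1 / p_5sigma)      # about 1 in 3.5 million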

Effect size

Main article: Effect size

Researchers focusing solely on whether their results are statistically significant might report findings that are not necessarily substantive.[26] To gauge the research significance of their result, researchers are also encouraged to report the effect size along with p-values (in cases where the effect being tested for is defined in terms of an effect size): the effect size quantifies the strength of an effect, such as the distance between two means or the correlation between two variables.[27]
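
For the difference between two means, one widely used effect-size measure is Cohen's d, the mean difference divided by the pooled standard deviation; a minimal sketch in Python with NumPy (the data are invented for illustration):

    import numpy as np

    def cohens_d(x, y):
        """Cohen's d: mean difference scaled by the pooled standard deviation."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * x.var(ddof=1) +
                      (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        return (x.mean() - y.mean()) / np.sqrt(pooled_var)

    # The same mean difference can be significant or not depending on
    # sample size; d reports the magnitude of the effect either way.
    print(cohens_d([5.1, 4.9, 5.6, 5.2, 4.8], [4.5, 4.7, 4.4, 4.9, 4.6]))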

References

  1. Redmond, Carol; Colton, Theodore (2001). "Clinical significance versus statistical significance". Biostatistics in Clinical Trials. Wiley Reference Series in Biostatistics (3rd ed.). West Sussex, United Kingdom: John Wiley & Sons Ltd. pp. 35–36. ISBN 0-471-82211-6.
  2. Cumming, Geoff (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York, USA: Routledge. pp. 27–28.
  3. Krzywinski, Martin; Altman, Naomi (30 October 2013). "Points of significance: Significance, P values and t-tests". Nature Methods (Nature Publishing Group) 10 (11): 1041–1042. doi:10.1038/nmeth.2698. Retrieved 3 July 2014.
  4. Sham, Pak C.; Purcell, Shaun M. (17 April 2014). "Statistical power and significance testing in large-scale genetic studies". Nature Reviews Genetics (Nature Publishing Group) 15 (5): 335–346. doi:10.1038/nrg3706. Retrieved 3 July 2014.
  5. Johnson, Valen E. (October 9, 2013). "Revised standards for statistical evidence". Proceedings of the National Academy of Sciences (National Academy of Sciences). doi:10.1073/pnas.1313476110. Retrieved 3 July 2014.
  6. Altman, Douglas G. (1999). Practical Statistics for Medical Research. New York, USA: Chapman & Hall/CRC. p. 167. ISBN 978-0412276309.
  7. Devore, Jay L. (2011). Probability and Statistics for Engineering and the Sciences (8th ed.). Boston, MA: Cengage Learning. pp. 300–344. ISBN 0-538-73352-7.
  8. Sirkin, R. Mark (2005). "Two-sample t tests". Statistics for the Social Sciences (3rd ed.). Thousand Oaks, CA: SAGE Publications, Inc. pp. 271–316. ISBN 1-412-90546-X.
  9. Borror, Connie M. (2009). "Statistical decision making". The Certified Quality Engineer Handbook (3rd ed.). Milwaukee, WI: ASQ Quality Press. pp. 418–472. ISBN 0-873-89745-5.
  10. Babbie, Earl R. (2013). "The logic of sampling". The Practice of Social Research (13th ed.). Belmont, CA: Cengage Learning. pp. 185–226. ISBN 1-133-04979-6.
  11. Faherty, Vincent (2008). "Probability and statistical significance". Compassionate Statistics: Applied Quantitative Analysis for Social Services (With exercises and instructions in SPSS) (1st ed.). Thousand Oaks, CA: SAGE Publications, Inc. pp. 127–138. ISBN 1-412-93982-8.
  12. Poletiek, Fenna H. (2001). "Formal theories of testing". Hypothesis-testing Behaviour. Essays in Cognitive Psychology (1st ed.). East Sussex, United Kingdom: Psychology Press. pp. 29–48. ISBN 1-841-69159-3.
  13. Fisher, Ronald A. (1925). Statistical Methods for Research Workers. Edinburgh, UK: Oliver and Boyd. p. 43. ISBN 0-050-02170-2.
  14. Schlotzhauer, Sandra (2007). Elementary Statistics Using JMP (SAS Press) (PAP/CDR ed.). Cary, NC: SAS Institute. pp. 166–169. ISBN 1-599-94375-1.
  15. McKillup, Steve (2006). "Probability helps you make a decision about your results". Statistics Explained: An Introductory Guide for Life Scientists (1st ed.). Cambridge, United Kingdom: Cambridge University Press. pp. 44–56. ISBN 0-521-54316-9.
  16. Myers, Jerome L.; Well, Arnold D.; Lorch Jr, Robert F. (2010). "The t distribution and its applications". Research Design and Statistical Analysis (3rd ed.). New York, NY: Routledge. pp. 124–153. ISBN 0-805-86431-8.
  17. Quinn, Geoffrey R.; Keough, Michael J. (2002). Experimental Design and Data Analysis for Biologists (1st ed.). Cambridge, UK: Cambridge University Press. pp. 46–69. ISBN 0-521-00976-6.
  18. Neyman, J.; Pearson, E. S. (1933). "The testing of statistical hypotheses in relation to probabilities a priori". Mathematical Proceedings of the Cambridge Philosophical Society 29: 492–510. doi:10.1017/S030500410001152X.
  19. Meier, Kenneth J.; Brudney, Jeffrey L.; Bohte, John (2011). Applied Statistics for Public and Nonprofit Administration (3rd ed.). Boston, MA: Cengage Learning. pp. 189–209. ISBN 1-111-34280-6.
  20. Healy, Joseph F. (2009). The Essentials of Statistics: A Tool for Social Research (2nd ed.). Belmont, CA: Cengage Learning. pp. 177–205. ISBN 0-495-60143-8.
  21. Cohen, Barry H. (2008). Explaining Psychological Statistics (3rd ed.). Hoboken, NJ: John Wiley and Sons. pp. 46–83. ISBN 0-470-00718-4.
  22. Heath, David (1995). An Introduction To Experimental Design And Statistics For Biology (1st ed.). Boston, MA: CRC Press. pp. 123–154. ISBN 1-857-28132-2.
  23. Vaughan, Simon (2013). Scientific Inference: Learning from Data (1st ed.). Cambridge, UK: Cambridge University Press. pp. 146–152. ISBN 1-107-02482-X.
  24. Bracken, Michael B. (2013). Risk, Chance, and Causation: Investigating the Origins and Treatment of Disease (1st ed.). New Haven, CT: Yale University Press. pp. 260–276. ISBN 0-300-18884-6.
  25. Franklin, Allan (2013). "Prologue: The rise of the sigmas". Shifting Standards: Experiments in Particle Physics in the Twentieth Century (1st ed.). Pittsburgh, PA: University of Pittsburgh Press. pp. ii–iii. ISBN 0-822-94430-8.
  26. Carver, Ronald P. (1978). "The Case Against Statistical Significance Testing". Harvard Educational Review 48: 378–399.
  27. Pedhazur, Elazar J.; Schmelkin, Liora P. (1991). Measurement, Design, and Analysis: An Integrated Approach (Student ed.). New York, NY: Psychology Press. pp. 180–210. ISBN 0-805-81063-3.
