Philosophy of statistics
The philosophy of statistics concerns the meaning, justification, utility, use and abuse of statistics and its methodology, together with the ethical and epistemological issues raised by the choice and interpretation of data and statistical methods.
- Foundations of statistics involves issues in theoretical statistics: its goals and the optimization methods used to meet them, the parametric assumptions made (or avoided, as in nonparametric statistics), the selection of a model for the underlying probability distribution, and the interpretation of the inferences drawn, which is related to the philosophy of probability and the philosophy of science. Discussion of the selection of goals and of the meaning of optimization, within the foundations of statistics, is the subject of the philosophy of statistics. The selection of distribution models, and of the means of selection, is likewise a subject of the philosophy of statistics, whereas the mathematics of optimization is the subject of nonparametric statistics.
- David Cox makes the point[citation needed] that any interpretation of evidence is in fact a statistical model, although Ian Hacking's work[citation needed] shows that many are unaware of this subtlety.
- Issues involving sample size, such as cost and efficiency, are common, for example in polling and pharmaceutical research.
- Extra-mathematical considerations arise in the design of experiments, and accommodating them is necessary in most actual experiments.[further explanation needed]
- The motivation and justification of data analysis and experimental design, as parts of the scientific method, are also considered.
- Distinctions between induction and logical deduction arise in making inferences from data and evidence, for example when frequentist interpretations are compared with the degrees of certainty derived from Bayesian inference (a numerical sketch of this contrast follows this list). However, the difference between induction and ordinary reasoning is not generally appreciated.[1]
- Leo Breiman exposed the diversity of thinking in his article "Statistical Modeling: The Two Cultures", making the point that statistics has several kinds of inference to make, modelling and prediction among them.[2]
- Issues in the philosophy of statistics arise throughout the history of statistics. Considerations of causality arise in the interpretation and definition of correlation, and in the theory of measurement.
- Objectivity in statistics is often confused with truth, whereas it is better understood as replicability, which then needs to be defined for the particular case. Theodore Porter develops this as the path pursued when trust has evaporated and is replaced by criteria.[3]
- Ethical issues, connected with epistemology and with medical applications, arise from the potential abuse of statistics, such as choosing a method or transforming the data so as to arrive at different probability conclusions for the same data set (see the second sketch after this list). An example is the question of what a statistical inference means when applied to a single person, such as one cancer patient, for whom there is no frequentist interpretation to adopt.
- Campaigns for statistical literacy must wrestle with the problem that the answers to most interesting questions about individual risk are very difficult to determine or interpret, even with the computing power currently available.
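As a rough numerical sketch of the frequentist/Bayesian contrast mentioned above (the data, the Wald approximation and the flat prior are illustrative assumptions, not drawn from the cited sources), the following Python fragment computes a 95% confidence interval and a 95% credible interval for a binomial proportion from the same data. The two intervals are numerically similar here, but the confidence interval describes the long-run behaviour of a repeated procedure, while the credible interval expresses a degree of certainty about the parameter given the data and the prior.

```python
import numpy as np
from scipy import stats

# Hypothetical data (an assumption for illustration): 18 successes in 50 trials.
k, n = 18, 50
p_hat = k / n

# Frequentist: approximate 95% Wald confidence interval for the proportion.
# Interpretation: the procedure covers the true p in roughly 95% of repeated samples.
z = stats.norm.ppf(0.975)
se = np.sqrt(p_hat * (1 - p_hat) / n)
conf_int = (p_hat - z * se, p_hat + z * se)

# Bayesian: 95% credible interval under a flat Beta(1, 1) prior.
# Interpretation: given the data and the prior, p lies in this interval
# with 95% posterior probability (a degree of certainty).
posterior = stats.beta(1 + k, 1 + n - k)
cred_int = (posterior.ppf(0.025), posterior.ppf(0.975))

print(f"Frequentist 95% confidence interval: ({conf_int[0]:.3f}, {conf_int[1]:.3f})")
print(f"Bayesian 95% credible interval:      ({cred_int[0]:.3f}, {cred_int[1]:.3f})")
```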
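The point that the choice of method can yield different probability conclusions for the same data can be illustrated by a well-known stopping-rule example (again a sketch under assumed numbers, not taken from the cited sources): a record of 9 successes and 3 failures produces different one-sided p-values for the hypothesis p = 0.5 depending on whether the analyst treats the sample size as fixed in advance or treats the experiment as stopping at the third failure.

```python
from scipy import stats

# Same observed data under two sampling models: 9 successes, 3 failures.
k, f = 9, 3
n = k + f

# Model 1: binomial sampling, n = 12 fixed in advance.
# One-sided p-value: P(successes >= 9 | n = 12, p = 0.5).
p_binomial = stats.binom.sf(k - 1, n, 0.5)

# Model 2: negative binomial sampling, stop after the 3rd failure.
# scipy's nbinom(r, p) counts failures before the r-th success; with p = 0.5
# the labels are symmetric, so this is also the count of successes seen before
# the 3rd failure.  One-sided p-value: P(successes >= 9 | stop at 3rd failure).
p_neg_binomial = stats.nbinom.sf(k - 1, f, 0.5)

print(f"Binomial sampling p-value:          {p_binomial:.4f}")      # about 0.073
print(f"Negative binomial sampling p-value: {p_neg_binomial:.4f}")  # about 0.033
```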
Notes
Further reading
- Breiman, Leo (2001). "Statistical Modeling: The Two Cultures". Statistical Science. 16 (3): 199–231. doi:10.1214/ss/1009213726.
- Efron, Bradley; Morris, Carl (1977). "Stein's Paradox in Statistics" (PDF). Scientific American. Vol. 236, no. 5. pp. 119–127.
- Efron, Bradley (1979). "Computers and the Theory of Statistics: Thinking the Unthinkable". SIAM Review. 21 (4): 460–480. doi:10.1137/1021092.
- Good, Irving J. (1988). "The Interface Between Statistics and Philosophy of Science". Statistical Science. 3 (4): 386–397. doi:10.1214/ss/1177012754. JSTOR 2245388.
- Hacking, Ian (2006). The Emergence of Probability (2nd ed.). Cambridge University Press. ISBN 0-521-68557-5.
- Hacking, Ian (1964). "On the Foundations of Statistics". The British Journal for the Philosophy of Science. 15 (57): 1–26. doi:10.1093/bjps/xv.57.1. JSTOR 685624.
- Hacking, Ian (1990). The Taming of Chance. Cambridge University Press. ISBN 0-521-38884-8.
- Mayo, Deborah (1996). Error and the growth of experimental knowledge. University of Chicago Press. ISBN 0-226-51198-7.
- Porter, Theodore (1995). Trust in Numbers. Princeton University Press. ISBN 0-691-03776-0.
- Savage, Leonard J. (1972). The Foundations of Statistics (2003 ed.). Dover. ISBN 0-486-62349-1.
- Vallverdu, Jordi (2016). Bayesians Versus Frequentists. A Philosophical Debate on Statistical Reasoning. Springer. ISBN 978-3-662-48638-2.