# Misuse of statistics

A misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy.

The false statistics trap can be quite damaging to the quest for knowledge. For example, in medical science, correcting a falsehood may take decades and cost lives.

Misuses can be easy to fall into. Professional scientists, even mathematicians and statisticians, can be fooled by quite simple methods, even when they are careful to check everything. Scientists have been known to fool themselves with statistics through ignorance of probability theory and a lack of standardization in their tests.

## Importance

Statistics may be a principled means of debate with opportunities for agreement,[1][2] but this is true only if the parties agree to a set of rules. Misuses of statistics violate the rules.

## Types of misuse

### Discarding unfavorable data

All a company has to do to promote a neutral (useless) product is to find or conduct, for example, 40 studies with a confidence level of 95%. If the product is really useless, this would on average produce one study showing the product was beneficial, one study showing it was harmful, and thirty-eight inconclusive studies (38 is 95% of 40). This tactic becomes more effective the more studies there are available. Organizations that do not publish every study they carry out, such as tobacco companies denying a link between smoking and cancer, anti-smoking advocacy groups and media outlets trying to prove a link between smoking and various ailments, or miracle pill vendors, are likely to use this tactic.
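This arithmetic can be checked with a quick simulation (a sketch, not from the source): 40 studies of a genuinely useless product, each analyzed with a simple two-sided z-test at the 5% level, will on average yield about two spuriously "significant" results.

```python
import random

random.seed(1)

def run_study(n=50, alpha_z=1.96):
    """Simulate one two-group study of a product with no real effect.

    Returns +1 ("beneficial"), -1 ("harmful"), or 0 (inconclusive),
    using a z-test on the difference of group means (both groups N(0, 1)).
    """
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = (2 / n) ** 0.5          # standard error of the difference of means
    z = diff / se
    if z > alpha_z:
        return +1                # spurious "benefit"
    if z < -alpha_z:
        return -1                # spurious "harm"
    return 0

results = [run_study() for _ in range(40)]
print("beneficial:  ", results.count(+1))
print("harmful:     ", results.count(-1))
print("inconclusive:", results.count(0))
```

Any one batch of 40 will fluctuate around the expected 1–1–38 split, which is exactly what makes selective publication of the "favorable" studies so effective.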

Another common technique is to perform a study that tests a large number of dependent (response) variables at the same time. For example, a study testing the effect of a medical treatment might use as dependent variables the probability of survival, the average number of days spent in the hospital, the patient's self-reported level of pain, etc. This also increases the likelihood that at least one of the variables will by chance show a correlation with the independent (explanatory) variable.
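The inflation from testing many outcomes follows directly from the 5% false-positive rate per test: with $k$ independent outcome variables and no real effect, the chance that at least one shows a spurious "significant" result is $1 - 0.95^k$. A minimal illustration:

```python
# Chance of at least one spurious "significant" result when k independent
# outcome variables are each tested at the 5% level, assuming none of them
# is actually related to the treatment (illustrative figures).
for k in (1, 5, 10, 20):
    p_any = 1 - 0.95 ** k
    print(f"{k:2d} outcomes tested -> {p_any:.0%} chance of a false positive")
```

With 20 outcome variables, a study of a completely ineffective treatment has roughly a 64% chance of producing at least one publishable "finding".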

Ronald Fisher considered this issue in his famous Lady tasting tea example experiment (from his 1935 book, The Design of Experiments). Regarding repeated experiments he said, "It would clearly be illegitimate, and would rob our calculation of its basis, if unsuccessful results were not all brought into the account."

### Loaded questions

The answers to surveys can often be manipulated by wording the question in such a way as to induce a bias towards a certain answer from the respondent. For example, in polling support for a war, the questions:

• Do you support the attempt by the USA to bring freedom and democracy to other places in the world?
• Do you support the unprovoked military action by the USA?

will likely result in data skewed in different directions, although they are both polling about the support for the war. A better way of wording the question could be "Do you support the current US military action abroad?"

Another way to do this is to precede the question by information that supports the "desired" answer. For example, more people will likely answer "yes" to the question "Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?" than to the question "Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?"

The proper formulation of questions can be very subtle. The responses to two questions can vary dramatically depending on the order in which they are asked.[3] (p 102) "A survey that asked about 'ownership of stock' found that most Texas ranchers owned stock, though probably not the kind traded on the New York Stock Exchange."[4] (p 59)

### Overgeneralization

Overgeneralization is a fallacy occurring when a statistic about a particular population is asserted to hold among members of a group for which the original population is not a representative sample.

For example, suppose 100% of apples are observed to be red in summer. The assertion "All apples are red" would be an instance of overgeneralization because the original statistic was true only of a specific subset of apples (those observed in summer), which is not expected to be representative of the population of apples as a whole.

A real-world example of the overgeneralization fallacy can be observed as an artifact of polling techniques that prohibit calling cell phones for over-the-phone political polls. As young people are more likely than other demographic groups to lack a conventional "landline" phone, a telephone poll that exclusively surveys people who answer calls to landline phones may undersample the views of young people, if no other measures are taken to account for this skewing of the sampling.

Thus, a poll examining the voting preferences of young people using this technique may not accurately represent young people's true voting preferences as a whole, because the sample excludes young people who carry only cell phones, whose voting preferences may or may not differ from those of the rest of the population.

Overgeneralization often occurs when information is passed through nontechnical sources, in particular mass media.

### Biased samples

Scientists have learned at great cost that gathering good experimental data for statistical analysis is difficult. Example: the placebo effect (mind over body) is very powerful. In one study, 100% of subjects developed a rash when exposed to an inert substance that was falsely called poison ivy, while few developed a rash when exposed to a "harmless" object that really was poison ivy.[4] (p 97) Researchers combat this effect with double-blind randomized comparative experiments. Statisticians typically worry more about the validity of the data than the analysis. This is reflected in a field of study within statistics known as the design of experiments.

### Misreporting or misunderstanding of estimated error

If a research team wants to know how 300 million people feel about a certain topic, it would be impractical to ask all of them. However, if the team picks a random sample of about 1000 people, they can be fairly certain that the results given by this group are representative of what the larger group would have said if they had all been asked.

This confidence can actually be quantified by the central limit theorem and other mathematical results. Confidence is expressed as a probability of the true result (for the larger group) being within a certain range of the estimate (the figure for the smaller group). This is the "plus or minus" figure often quoted for statistical surveys. The probability part of the confidence level is usually not mentioned; when it is omitted, a standard number like 95% is assumed.

The two numbers are related. If a survey has an estimated error of ±5% at 95% confidence, it also has an estimated error of about ±6.6% at 99% confidence. For a normally distributed population, ±$x$% at 95% confidence corresponds to approximately ±$1.31x$% at 99% confidence (the ratio of the critical values, $2.576/1.960$).
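The conversion factor of roughly 1.3 between the two confidence levels can be verified from the normal distribution's critical values, using only Python's standard library:

```python
from statistics import NormalDist

# Two-sided critical values of the standard normal distribution.
z95 = NormalDist().inv_cdf(0.975)   # 95% confidence -> about 1.960
z99 = NormalDist().inv_cdf(0.995)   # 99% confidence -> about 2.576
ratio = z99 / z95

print(f"z(95%) = {z95:.3f}, z(99%) = {z99:.3f}, ratio = {ratio:.2f}")
print(f"a ±5% margin at 95% confidence is ±{5 * ratio:.1f}% at 99% confidence")
```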

The smaller the estimated error, the larger the required sample, at a given confidence level.

At 95.4% confidence (two standard errors):

• ±1% would require 10,000 people.
• ±2% would require 2,500 people.
• ±3% would require 1,111 people.
• ±4% would require 625 people.
• ±5% would require 400 people.
• ±10% would require 100 people.
• ±20% would require 25 people.
• ±25% would require 16 people.
• ±50% would require 4 people.
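These figures follow from the worst-case margin of error for a proportion at two standard errors: the margin is $2\sqrt{0.25/n} = 1/\sqrt{n}$, so the required sample size is $n \approx 1/\text{moe}^2$. A quick check reproduces the list above:

```python
# At 95.4% confidence (2 standard errors), the worst-case margin of error
# for a proportion near 50% is 2 * sqrt(0.25 / n) = 1 / sqrt(n),
# so the required sample size is n = 1 / moe**2.
for moe in (0.01, 0.02, 0.03, 0.04, 0.05, 0.10, 0.20, 0.25, 0.50):
    n = round(1 / moe ** 2)
    print(f"±{moe:.0%} would require about {n:,} people")
```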

People may assume, because the confidence figure is omitted, that there is a 100% certainty that the true result is within the estimated error. This is not mathematically correct.

Many people may not realize that the randomness of the sample is very important. In practice, many opinion polls are conducted by phone, which distorts the sample in several ways, including exclusion of people who do not have phones, favoring the inclusion of people who have more than one phone, favoring the inclusion of people who are willing to participate in a phone survey over those who refuse, etc. Non-random sampling makes the estimated error unreliable.

On the other hand, people may consider that statistics are inherently unreliable because not everybody is called, or because they themselves are never polled. People may think that it is impossible to get data on the opinion of dozens of millions of people by polling just a few thousand. This is also inaccurate[citation needed]. A poll with perfect unbiased sampling and truthful answers has a mathematically determined margin of error, which depends only on the number of people polled.

However, often only one margin of error is reported for a survey. When results are reported for population subgroups, a larger margin of error will apply, but this may not be made clear. For example, a survey of 1000 people may contain 100 people from a certain ethnic or economic group. The results focusing on that group will be much less reliable than results for the full population. If the margin of error for the full sample was 4%, say, then the margin of error for such a subgroup could be around 13%.
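The subgroup figure follows from the $1/\sqrt{n}$ scaling of the margin of error; a short check using the sample sizes from the text:

```python
import math

full_moe = 0.04        # margin of error reported for the full sample
n_full, n_sub = 1000, 100   # full sample vs. the subgroup within it

# The margin of error grows as 1/sqrt(n), so a 10x smaller subgroup
# has a sqrt(10)x larger margin of error.
sub_moe = full_moe * math.sqrt(n_full / n_sub)
print(f"subgroup margin of error ≈ ±{sub_moe:.1%}")   # ≈ ±12.6%
```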

There are also many other measurement problems in population surveys.

The problems mentioned above apply to all statistical experiments, not just population surveys.

### False causality

When a statistical test shows a correlation between A and B, there are usually six possibilities:

1. A causes B.
2. B causes A.
3. A and B both partly cause each other.
4. A and B are both caused by a third factor, C.
5. B is caused by C which is correlated to A.
6. The observed correlation was due purely to chance.

The sixth possibility can be quantified by statistical tests that calculate the probability that a correlation as large as the one observed would arise purely by chance if, in fact, there is no relationship between the variables. However, even if that possibility has a small probability, the desired conclusion that A causes B is only one of the five remaining explanations; the other four still stand.

If the number of people buying ice cream at the beach is statistically related to the number of people who drown at the beach, then nobody would claim ice cream causes drowning because it's obvious that it isn't so. (In this case, both drowning and ice cream buying are clearly related by a third factor: the number of people at the beach).

This fallacy can be used, for example, to prove that exposure to a chemical causes cancer. Replace "number of people buying ice cream" with "number of people exposed to chemical X", and "number of people who drown" with "number of people who get cancer", and many people will believe you. In such a situation, there may be a statistical correlation even if there is no real effect. For example, if there is a perception that a chemical site is "dangerous" (even if it really isn't) property values in the area will decrease, which will entice more low-income families to move to that area. If low-income families are more likely to get cancer than high-income families (this can happen for many reasons, such as a poorer diet or less access to medical care) then rates of cancer will go up, even though the chemical itself is not dangerous. It is believed[5] that this is exactly what happened with some of the early studies showing a link between EMF (electromagnetic fields) from power lines and cancer.[6]
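A confounded correlation of this kind is easy to reproduce. The sketch below (with made-up rates) lets a lurking variable — the day's beach crowd — drive both ice-cream purchases and drownings; the two correlate clearly although neither causes the other:

```python
import random

random.seed(0)

# A confounder C (beach crowd) drives both A (ice-cream sales) and
# B (drownings); A never causes B, yet A and B end up correlated.
days = 500
ice_cream, drownings = [], []
for _ in range(days):
    crowd = random.randint(10, 1000)     # C: people at the beach that day
    ice_cream.append(sum(random.random() < 0.3 for _ in range(crowd)))
    drownings.append(sum(random.random() < 0.001 for _ in range(crowd)))

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"correlation(ice cream, drownings) = {corr(ice_cream, drownings):.2f}")
```

The correlation is robustly positive even though, by construction, ice cream has no effect whatsoever on drowning.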

In well-designed studies, the effect of false causality can be eliminated by assigning some people into a "treatment group" and some people into a "control group" at random, and giving the treatment group the treatment and not giving the control group the treatment. In the above example, a researcher might expose one group of people to chemical X and leave a second group unexposed. If the first group had higher cancer rates, the researcher knows that no third factor affected who was exposed, because exposure was controlled by the researcher and assigned at random. However, in many applications, actually doing an experiment in this way is prohibitively expensive, infeasible, unethical, illegal, or downright impossible. For example, it is highly unlikely that an IRB would accept an experiment that involved intentionally exposing people to a dangerous substance in order to test its toxicity. The obvious ethical implications of such experiments limit researchers' ability to empirically test causation.

### Proof of the null hypothesis

In a statistical test, the null hypothesis ($H_0$) is considered valid until enough data proves it wrong. Then $H_0$ is rejected and the alternative hypothesis ($H_A$) is accepted as supported. This can happen by chance even when $H_0$ is true, with a probability denoted alpha, the significance level. The process can be compared to the judicial process, where the accused is considered innocent ($H_0$) until proven guilty ($H_A$) beyond reasonable doubt (alpha).

But if data does not give us enough proof to reject $H_0$, this does not automatically prove that $H_0$ is correct. If, for example, a tobacco producer wishes to demonstrate that its products are safe, it can easily conduct a test with a small sample of smokers versus a small sample of non-smokers. It is unlikely that any of them will develop lung cancer (and even if they do, the difference between the groups has to be very big in order to reject $H_0$). Therefore it is likely—even when smoking is dangerous—that our test will not reject $H_0$. If $H_0$ is accepted, it does not automatically follow that smoking is proven harmless. The test has insufficient power to reject $H_0$, so the test is useless and the value of the "proof" of $H_0$ is also null.

This can—using the judicial analogue above—be compared with the truly guilty defendant who is released just because the proof is not enough for a guilty verdict. This does not prove the defendant's innocence, but only that there is not proof enough for a guilty verdict.
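The power problem in the tobacco example can be made concrete. With hypothetical disease rates (1% among non-smokers, 2% among smokers, invented here purely for illustration), a standard normal-approximation power calculation shows how a small study is almost guaranteed to fail to reject $H_0$ even though the effect is real:

```python
from statistics import NormalDist

# Hypothetical rates, for illustration only: the disease strikes 1% of the
# control group and 2% of the exposed group over the study period.
p0, p1 = 0.01, 0.02

for n in (100, 1000, 10000):                  # sample size per group
    se = ((p0 * (1 - p0) + p1 * (1 - p1)) / n) ** 0.5
    z = (p1 - p0) / se                        # expected z-statistic
    power = 1 - NormalDist().cdf(1.96 - z)    # approx. chance of rejecting H0
    print(f"n = {n:5d} per group -> power ≈ {power:.0%}")
```

With 100 people per group the test rejects $H_0$ less than one time in ten even though the exposed group really is at double the risk; only the very large study has a realistic chance of detecting the effect.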

"...the null hypothesis is never proved or established, but it is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis." (Fisher in The Design of Experiments)

Multiple reasons for confusion exist. The double negative of rejecting the null hypothesis in order to support the alternative (research) hypothesis is counter-intuitive. Additional confusion is a result of historical turmoil within statistics: texts have merged Fisher's "significance testing" (where the null hypothesis is never accepted) with "hypothesis testing" (where some hypothesis is always accepted), to the lasting confusion of many. The founding fathers of classical inferential statistics vigorously disagreed over the two types of statistical tests; the disagreement was never settled, and Fisher died after a generation of dispute. For mathematical reasons significance tests are often treated as a special case of hypothesis tests, but philosophical differences remain, and the Bayesian school of statistics holds that significance testing is flawed both philosophically and mathematically. Finally, statistics classes rarely have the time to dwell on the underlying terminology and logic, many students have weak mathematical backgrounds, and probability is not intuitive.

### Data dredging

Data dredging is an abuse of data mining. In data dredging, large compilations of data are examined in order to find a correlation, without any pre-defined choice of a hypothesis to be tested. Since the significance level for establishing a relationship between two parameters is usually chosen to be 5% (equivalently, a 95% confidence level), there is thus a 5% chance of finding an apparently significant correlation between any two sets of completely random variables. Given that data dredging efforts typically examine large datasets with many variables, and hence even larger numbers of pairs of variables, spurious but apparently statistically significant results are almost certain to be found by any such study.
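The near-certainty of spurious findings is easy to demonstrate. The sketch below generates 20 mutually independent random variables and tests all 190 pairs at the 5% level; several "significant" correlations appear even though, by construction, no real relationship exists:

```python
import random
from itertools import combinations

random.seed(2)

n, k = 100, 20    # 100 observations each of 20 mutually unrelated variables
data = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Under independence, r is roughly N(0, 1/(n-1)); the 5% two-sided cutoff:
threshold = 1.96 / (n - 1) ** 0.5

pairs = list(combinations(data, 2))
hits = sum(abs(corr(a, b)) > threshold for a, b in pairs)
print(f"{hits} of {len(pairs)} pairs of random variables look 'significant'")
```

On average about 5% of the 190 pairs (roughly nine or ten) cross the threshold; a dredger who reports only those pairs has manufactured "findings" out of pure noise.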

Note that data dredging is a valid way of finding a possible hypothesis but that hypothesis must then be tested with data not used in the original dredging. The misuse comes in when that hypothesis is stated as fact without further validation.

"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. The remedy is clear. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last."[7] (p 466)

### Data manipulation

Informally called "fudging the data," this practice includes selective reporting (see also publication bias) and even simply making up false data.

Examples of selective reporting abound. The easiest and most common examples involve choosing a group of results that follow a pattern consistent with the preferred hypothesis while ignoring other results or "data runs" that contradict the hypothesis.

Studies claiming to show evidence of ESP ability have long been disputed. Critics accuse ESP proponents of publishing only experiments with positive results and shelving those that show negative results. A "positive result" is a test run (or data run) in which the subject guesses a hidden card, etc., at a much higher frequency than random chance.[citation needed]

Scientists, in general, question the validity of study results that cannot be reproduced by other investigators. However, some scientists refuse to publish their data and methods.[8]

Data manipulation is a serious concern in even the most honest of statistical analyses. Outliers, missing data and non-normality can all adversely affect the validity of statistical analysis. It is appropriate to study the data and repair real problems before analysis begins. "[I]n any scatter diagram there will be some points more or less detached from the main part of the cloud: these points should be rejected only for cause."[9]

### Non-enduring class fallacies

This type of fallacy involves the claim or implication that members of a statistical class persist over time when this is in fact not the case. The claims made about that statistical class may indeed be statistically correct; the fallacy lies in the implication that the class is composed of the same individuals from one point in time to the next.

For example, the 2011 claim by Senator Bernie Sanders that "the top 1% of all income earners in the USA made 23.5% of all income", while statistically correct, may still be fallacious due to the implication that the top 1% is an enduring statistical class composed of the same individuals as in the previous year. While many of the individuals in this class may persist from year to year, the original statement gives no indication of how many actually do, inviting the fallacious implication that all individuals in the class endured.

This fallacy can easily be avoided by specifying whether the statistics used refer to the same group of individuals over the period in question. When this precaution is not taken then suspicions of this fallacy may be raised, even if the fallacy has in fact not been committed.

### Other fallacies

Pseudoreplication is a technical error associated with Analysis of variance. Complexity hides the fact that statistical analysis is being attempted on a single sample (N=1). For this degenerate case the variance cannot be calculated (division by zero).
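The degenerate case is easy to see with Python's standard library: with a single observation the sample variance is undefined, because its denominator $N-1$ is zero, however elaborate the surrounding analysis looks:

```python
from statistics import StatisticsError, variance

# With a single sample (N = 1) the sample variance divides by N - 1 = 0,
# so no legitimate error estimate exists.
try:
    variance([3.7])
except StatisticsError as err:
    print("cannot compute variance of one observation:", err)
```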

The gambler's fallacy assumes that the probability of a future event is changed by independent events that have already occurred. Thus, if someone has already tossed 9 coins and each has come up heads, people tend to assume that the likelihood of a tenth toss also being heads is 1023 to 1 against (which was the probability of ten consecutive heads before the first coin was tossed), when in fact the chance of the tenth head is 50% (assuming the coin is unbiased).

The prosecutor's fallacy[10] (pp 203–205 and Appendix C) has led, in the UK, to the false imprisonment of women for murder when the courts were presented with the prior statistical likelihood of a woman's three children dying from Sudden Infant Death Syndrome as if it were the chance that her already-dead children had died from the syndrome. This led to statements from Roy Meadow that the chance they had died of Sudden Infant Death Syndrome was extremely small (one in millions). The courts then handed down convictions in spite of the statistical inevitability that a few women would suffer this tragedy. The convictions were eventually overturned (and Meadow was subsequently struck off the U.K. Medical Register for giving "erroneous" and "misleading" evidence, although this was later reversed by the courts).[11] Meadow's calculations were irrelevant to these cases, but even if they had been relevant, the same methods of calculation would have shown that the odds against two cases of infanticide were even smaller (one in billions).[11]
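The error can be expressed in a few lines of Bayes' rule, with deliberately invented priors (the real figures were disputed): what matters is not how improbable the innocent explanation is in absolute terms, but how it compares with the competing guilty explanation.

```python
# Hypothetical priors, invented for illustration only — NOT the figures
# from the actual cases.
p_natural = 1 / 1_000_000    # prior: two deaths by natural causes
p_murder = 1 / 10_000_000    # prior: double infanticide (rarer still)

# Given that two deaths occurred, Bayes' rule over the two explanations:
p_natural_given_deaths = p_natural / (p_natural + p_murder)
print(f"P(natural causes | two deaths) ≈ {p_natural_given_deaths:.0%}")
```

Even though the innocent explanation has a one-in-a-million prior, it remains far more probable than the guilty one once both rare explanations are compared, which is the comparison the courts failed to make.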

The ludic fallacy: probabilities are based on simple models that ignore real (if remote) possibilities. Poker players do not consider that an opponent may draw a gun rather than a card. The insured (and governments) assume that insurers will remain solvent, but see AIG and systemic risk.

## Notes

1. ^ Abelson, Robert P. (1995). Statistics as Principled Argument. Lawrence Erlbaum Associates. ISBN 0-8058-0528-1. "... the purpose of statistics is to organize a useful argument from quantitative evidence, using a form of principled rhetoric."
2. ^ Porter, Theodore (1995). Trust in numbers : the pursuit of objectivity in science and public life. Princeton, N.J: Princeton University Press. ISBN 0-691-03776-0. Porter considered the history of cost-benefit analysis. While this is perhaps more economical than statistical, it is a quantitative decision-making technique considered to be in the statistical domain.
3. ^ Kahneman, Daniel (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux. ISBN 9780374533557.
4. ^ a b Moore, David; William I. Notz (2006). Statistics : concepts and controversies (6th ed.). New York: W.H. Freeman. ISBN 9780716786368.
5. ^ http://www.quackwatch.org/01QuackeryRelatedTopics/emf.html
7. ^ Moore, David; George P. McCabe (2003). Introduction to the practice of statistics (4th ed.). New York: W.H. Freeman and Co. ISBN 0716796570.
8. ^ http://www.researchinformation.info/features/feature.php?feature_id=214
9. ^ Freedman, David; Robert Pisani and Roger Purves (1998). Statistics (3rd ed.). New York: W.W. Norton. ISBN 0-393-97083-3.
10. ^ Seife, Charles (2011). Proofiness : how you're being fooled by the numbers. New York: Penguin. ISBN 9780143120070. Discusses the notorious British case.
11. ^ a b Michael Kaplan and Ellen Kaplan, Chances Are (Adventures in Probability), Viking Penguin, 2006, pp. 192-5. ISBN 978-0143038344
