Testing hypotheses suggested by the data
In statistics, hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is because circular reasoning (double dipping) would be involved: something seems true in the limited data set, therefore we hypothesize that it is true in general, therefore we (wrongly) test it on the same limited data set, which seems to confirm that it is true. Generating hypotheses based on data already observed, in the absence of testing them on new data, is referred to as post hoc theorizing (from Latin post hoc, "after this").
The correct procedure is to test any hypothesis on a data set that was not used to generate the hypothesis.
Example of fallacious acceptance of a hypothesis
Suppose fifty different researchers run clinical trials to test whether Vitamin X is efficacious in treating cancer. The vast majority of them find no significant difference between measurements made on patients who have taken Vitamin X and those who have taken a placebo. However, due to statistical noise, one study finds a significant association between taking Vitamin X and being cured of cancer.
Taking all 50 studies together, the only conclusion that can be drawn with confidence is that there is still no evidence that Vitamin X has any effect on treating cancer. However, someone seeking greater publicity for the one outlier study could try to construct a hypothesis suggested by the data, by finding some aspect unique to that study and claiming that this aspect is the key to its differing results. Suppose, for instance, that this study was the only one conducted in Denmark. It could then be claimed that the set of 50 studies shows Vitamin X to be more efficacious in Denmark than elsewhere. However, while the data do not contradict this hypothesis, they do not strongly support it either. Only one or more additional studies, testing the hypothesis on new data, could bolster it.
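The scenario above can be sketched in a short simulation. The cure rate, arm size, and choice of test here are illustrative assumptions, not figures from any real trial: all 50 trials are generated with no true effect, yet some will typically cross the conventional p < 0.05 threshold by chance.

```python
import math
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def two_proportion_p(cures_a, cures_b, n):
    """Two-sided p-value for a two-proportion z-test with equal arm sizes."""
    p_pool = (cures_a + cures_b) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    if se == 0:
        return 1.0
    z = abs(cures_a - cures_b) / (n * se)  # |difference in proportions| / SE
    return math.erfc(z / math.sqrt(2))    # P(|Z| > z) under the null

n = 100        # patients per arm (assumed)
p_cure = 0.30  # true cure rate in BOTH arms: Vitamin X has no real effect

p_values = []
for _ in range(50):  # 50 independent "trials"
    vitamin = sum(random.random() < p_cure for _ in range(n))
    placebo = sum(random.random() < p_cure for _ in range(n))
    p_values.append(two_proportion_p(vitamin, placebo, n))

significant = sum(p < 0.05 for p in p_values)
print(f"{significant} of 50 null trials reached p < 0.05 by chance alone")
```

Picking out whichever trial happened to come up significant, and then explaining its "success" after the fact, is exactly the fallacy the Denmark example describes.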
The general problem
Testing a hypothesis suggested by the data can very easily result in false positives (type I errors). If one looks long enough and in enough different places, eventually data can be found to support any hypothesis. Yet, these positive data do not by themselves constitute evidence that the hypothesis is correct. The negative test data that were thrown out are just as important, because they give one an idea of how common the positive results are compared to chance. Running an experiment, seeing a pattern in the data, proposing a hypothesis from that pattern, then using the same experimental data as evidence for the new hypothesis is extremely suspect, because data from all other experiments, completed or potential, have essentially been "thrown out" by choosing to look only at the experiments that suggested the new hypothesis in the first place.
A large set of tests as described above greatly inflates the probability of a type I error, as all but the data most favorable to the hypothesis are discarded. This is a risk not only in hypothesis testing but in all statistical inference, since it is often problematic to describe accurately the process that has been followed in searching for and discarding data. In other words, one wants to keep all data (regardless of whether they tend to support or refute the hypothesis) from "good tests", but it is sometimes difficult to figure out what a "good test" is. It is a particular problem in statistical modelling, where many different models are rejected by trial and error before publishing a result (see also overfitting, publication bias).
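The inflation is easy to quantify under the simplifying assumption of independent tests: if each test has false-positive rate α, the probability that at least one of k tests comes up positive by chance is 1 − (1 − α)^k.

```python
# Family-wise false-positive probability for k independent tests at level alpha,
# assuming every null hypothesis is true (a simplifying illustration).
alpha = 0.05
rates = {k: 1 - (1 - alpha) ** k for k in (1, 10, 50)}
for k, fwer in rates.items():
    print(f"{k:>2} tests: P(at least one false positive) = {fwer:.3f}")
```

At α = 0.05, a single test has a 5% chance of a false positive, but across 50 tests the chance that at least one is falsely "significant" exceeds 92%.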
The error is particularly prevalent in data mining and machine learning. It also commonly occurs in academic publishing where only reports of positive, rather than negative, results tend to be accepted, resulting in the effect known as publication bias.
All strategies for sound testing of hypotheses suggested by the data involve including a wider range of tests in an attempt to validate or refute the new hypothesis. These include:
- Collecting confirmation samples
- Methods of compensation for multiple comparisons
- Simulation studies including adequate representation of the multiple-testing actually involved
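As a deliberately simple illustration of the second strategy, the Bonferroni correction judges each of k tests at level α/k rather than α; the function name and example p-values below are hypothetical.

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return which hypotheses survive the Bonferroni-corrected threshold alpha/k."""
    k = len(p_values)
    return [p < alpha / k for p in p_values]

# A lone p = 0.03 among 50 tests looks "significant" at alpha = 0.05,
# but fails the corrected threshold of 0.05 / 50 = 0.001.
p_values = [0.03] + [0.5] * 49
print(bonferroni_significant(p_values))  # every entry is False
```

The correction is conservative (it controls the chance of any false positive at the cost of power), which is why the simulation-based approaches in the list above are sometimes preferred when the dependence between tests is known.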
Henry Scheffé's simultaneous test of all contrasts in multiple comparison problems is the best-known remedy in the case of analysis of variance. It is a method designed for testing hypotheses suggested by the data while avoiding the fallacy described above.
See also
- Bonferroni correction
- Data analysis
- Data dredging
- Exploratory data analysis
- Post-hoc analysis
- Predictive analytics
- Texas sharpshooter fallacy
- Type I and type II errors
- Uncomfortable science