Circular analysis
In statistics, circular analysis is the selection of the details of a data analysis using the data that is being analysed. It is often referred to as double dipping, as one uses the same data twice. Circular analysis unjustifiably inflates the apparent statistical strength of any results reported and, at the most extreme, can lead to an apparently significant result being found in data that consists only of noise. In particular, where an experiment is carried out to study a postulated effect, it is a misuse of statistics to first reduce the complete dataset by selecting a subset of the data in ways that are aligned with the effect being studied. A second misuse occurs when the performance of a fitted model or classification rule is reported as a raw result, without allowing for the effects of model selection and of tuning parameters on the data being analysed.
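The inflation can be illustrated with a small numerical sketch. The following Python example (using NumPy and SciPy; it illustrates the general idea and is not taken from any of the studies cited below) generates data consisting only of noise, selects the measures that happen to show the strongest apparent effect, and then tests that selection on the same data; the test appears highly significant even though no effect exists.

```python
# A minimal sketch (NumPy/SciPy, illustrative only): pure noise is generated,
# the measures showing the strongest apparent effect are selected, and that
# selection is then tested on the same data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_measures = 20, 1000

# Pure noise: no real effect exists anywhere in this dataset.
data = rng.standard_normal((n_subjects, n_measures))

# Circular step: keep only the 10 measures whose mean across subjects happens
# to be largest, i.e. a subset selected in a way aligned with the effect sought.
selected = np.argsort(data.mean(axis=0))[-10:]
subject_scores = data[:, selected].mean(axis=1)

# Test the selected subset against zero using the very same data.
res = stats.ttest_1samp(subject_scores, 0.0)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.2g}")  # comes out 'highly significant'
```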
Examples
At its simplest, circular analysis can be as straightforward as the decision to remove outliers after noticing that doing so improves the results of an experiment. The effect can be more subtle. Functional magnetic resonance imaging (fMRI) data, for example, often needs a considerable amount of pre-processing, and these steps might be applied incrementally until the analysis 'works'. Similarly, the classifiers used in a multivoxel pattern analysis of fMRI data require parameters, which could be tuned to maximise the classification accuracy.
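The parameter-tuning problem can be illustrated with a hypothetical sketch (Python with scikit-learn; the classifier, parameter values and data sizes are arbitrary choices, not taken from the fMRI literature). The classifier's parameter is selected to maximise accuracy measured on the very data used to fit it, and the reported accuracy is far above chance even though the labels are random:

```python
# A hypothetical sketch (Python/scikit-learn; classifier, parameter grid and data
# sizes are arbitrary choices): the parameter is tuned to maximise accuracy
# measured on the same data used to fit the model.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 50))    # 40 samples of 50 noise features
y = rng.integers(0, 2, size=40)      # random binary labels: nothing to learn

best_k, best_acc = None, 0.0
for k in (1, 3, 5, 7, 9):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    acc = clf.score(X, y)            # accuracy on the fitting data (the circular step)
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"'best' k = {best_k}, apparent accuracy = {best_acc:.2f}")  # ~1.00 for k = 1
```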
In geology, the potential for circular analysis has been noted[1] in the mapping of geological faults: such maps may be drawn on the assumption that faults develop and propagate in a particular way, and those maps may later be used as evidence that faults do in fact develop in that way.
Solutions
Careful design of the analysis one plans to perform, prior to collecting the data, ensures that the analysis choices are not affected by the data collected. Alternatively, one might decide to perfect the classification on one or two participants, and then apply the fixed analysis to the data from the remaining participants. Regarding the selection of classification parameters, a common method is to divide the data into two sets, find the optimal parameter value using one set, and then test with that parameter value on the second set. This is a standard technique[citation needed] used (for example) by the Princeton MVPA classification library.[citation needed]
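The split-data approach can be sketched as follows (a hypothetical Python/scikit-learn illustration, not the Princeton library's own code). The parameter is chosen using only one subset, and performance is reported on the held-out subset, where it remains near chance level for noise data:

```python
# A hypothetical sketch of the split-data remedy (Python/scikit-learn): the
# parameter is chosen using one subset and accuracy is reported only on a
# held-out subset that played no part in that choice.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 50))   # noise features
y = rng.integers(0, 2, size=200)     # random labels: true accuracy is 50%

# Split once: one set for choosing the parameter, one untouched set for the report.
X_tune, X_test, y_tune, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Pick k using only the tuning set (here, simply by accuracy on that set).
best_k = max((1, 3, 5, 7, 9),
             key=lambda k: KNeighborsClassifier(n_neighbors=k)
                           .fit(X_tune, y_tune).score(X_tune, y_tune))

# Evaluate once on the held-out set; with noise data this stays near chance.
final_acc = (KNeighborsClassifier(n_neighbors=best_k)
             .fit(X_tune, y_tune).score(X_test, y_test))
print(f"chosen k = {best_k}, held-out accuracy = {final_acc:.2f}")  # ~0.5
```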
Notes
- ^ Scott, D. L.; Braun, J.; Etheridge, M. A. (1994). "Dip analysis as a tool for estimating regional kinematics in extensional terranes". Journal of Structural Geology. 16 (3): 393. doi:10.1016/0191-8141(94)90043-4.
References
- Kriegeskorte, N.; Simmons, W. K.; Bellgowan, P. S. F.; Baker, C. I. (2009). "Circular analysis in systems neuroscience: The dangers of double dipping". Nature Neuroscience. 12 (5): 535–540. doi:10.1038/nn.2303. PMC 2841687. PMID 19396166.
- Kriegeskorte, N.; Lindquist, M. A.; Nichols, T. E.; Poldrack, R. A.; Vul, E. (2010). "Everything you never wanted to know about circular analysis, but were afraid to ask". Journal of Cerebral Blood Flow & Metabolism. 30 (9): 1551. doi:10.1038/jcbfm.2010.86.
- Tolstrup, N.; Rouzé, P.; Brunak, S. (1997). "A branch point consensus from Arabidopsis found by non-circular analysis allows for better prediction of acceptor sites". Nucleic Acids Research. 25 (15): 3159–3163. doi:10.1093/nar/25.15.3159. PMC 146848. PMID 9224618.
- Olivetti, E.; Mognon, A.; Greiner, S.; Avesani, P. (2010). "Brain Decoding: Biases in Error Estimation". 2010 First Workshop on Brain Decoding: Pattern Recognition Challenges in Neuroimaging. p. 40. doi:10.1109/WBD.2010.9. ISBN 978-1-4244-8486-7.