In science, an adversarial collaboration is an experiment conducted jointly by two groups of experimenters holding competing hypotheses, with the aim of designing and implementing the experiment in a way that satisfies both groups that it contains no obvious biases or weaknesses.
Adversarial collaboration has been recommended by Daniel Kahneman and others as a way of resolving contentious issues in fringe science, such as the existence or nonexistence of extrasensory perception.
Philip Tetlock and Gregory Mitchell have discussed it in various articles. They argue:
Of course, what makes adversarial collaboration scientifically attractive—the prospect of breaking epistemic impasses—may also render it politically unattractive. Nothing will happen if either side decides that it is better off when there is less scientific clarity. For this reason, failures to broker adversarial collaborations are profoundly informative: they signal to the policy world that the American racism debate and the sub-debate on unconscious prejudice may be politicized beyond scientific redemption. Tetlock (2006) has offered rough sociology-of-science diagnostics for judging the odds of failures of this sort. Adversarial collaboration is most feasible when least needed: when the clashing camps have advanced testable theories, subscribe to common canons for testing those theories, and disagreements are robust but respectful. And adversarial collaboration is least feasible when most needed: when the scientific community lacks clear criteria for falsifying points of view, disagrees on key methodological issues, relies on second- or third-best substitute methods for testing causality, and is fractured into opposing camps that engage in ad hominem posturing and that have intimate ties to political actors who see any concession as weakness. Tetlock (2006) calls the former community “epistemic heaven,” the latter “epistemic hell,” and maintains—in the spirit of Figure 4—that if adversarial collaboration is indeed unnecessary in heaven and impossible in hell, we should expect the greatest expected returns in the “murky middle” in which theory-testing conditions are less than ideal but not yet hopeless.
- Kahneman, Daniel; Klein, Gary (2009). "Conditions for intuitive expertise: A failure to disagree". American Psychologist 64(6): 515–526. doi:10.1037/a0016755
- Wagenmakers, E.-J.; Wetzels, R.; Borsboom, D.; van der Maas, H. L. J. (2010). "Why psychologists must change the way they analyze their data: The case of psi".
- Tetlock, Philip; Mitchell, Gregory (2009). "Implicit Bias and Accountability Systems: What Must Organizations Do to Prevent Discrimination?". Research in Organizational Behavior 29: 3–38.