Per-comparison error rate
In statistics, the per-comparison error rate (PCER) is the probability of a Type I error (false positive) for an individual hypothesis test, in the absence of any multiple hypothesis testing correction. When many hypotheses are tested at a fixed per-comparison level, some tests are expected to yield false positives by chance alone; statisticians therefore apply the Bonferroni correction, false discovery rate procedures, and other methods to control the chance that a true null hypothesis (a genuinely negative result) is incorrectly rejected as positive somewhere in the family of tests.
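The gap between the per-comparison error rate and the family-wise error rate can be illustrated with a short calculation. The sketch below assumes m independent tests, each at per-comparison level α; the values of α and m are illustrative choices, not from any particular study.

```python
alpha = 0.05   # per-comparison error rate (illustrative)
m = 20         # number of independent tests (illustrative)

# With no correction, each test individually has error rate alpha,
# but the probability of at least one false positive across all
# m tests (the family-wise error rate) is much larger:
fwer_uncorrected = 1 - (1 - alpha) ** m

# The Bonferroni correction runs each test at alpha / m, which
# bounds the family-wise error rate at (approximately) alpha:
fwer_bonferroni = 1 - (1 - alpha / m) ** m

print(f"FWER, uncorrected: {fwer_uncorrected:.3f}")
print(f"FWER, Bonferroni:  {fwer_bonferroni:.3f}")
```

With these assumed values, the uncorrected family-wise error rate is roughly 0.64, even though each individual comparison keeps its 0.05 per-comparison error rate.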
- Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society, Series B 57 (1): 289–300. MR 1325392.