Almost sure hypothesis testing

Almost Sure Hypothesis Testing or A.S. Hypothesis Testing utilizes almost sure convergence in order to determine the validity of a statistical hypothesis with probability one (w.p.1). That is to say, whenever the null hypothesis, $H_0$, is true, an A.S. hypothesis test will fail to reject the null hypothesis w.p.1 for all sufficiently large samples. Similarly, whenever the alternative hypothesis, $H_1$, is true, an A.S. hypothesis test will reject the null hypothesis with probability one for all sufficiently large samples. Along similar lines, an A.S. confidence interval eventually contains the parameter of interest w.p.1.

Description


For simplicity, assume we have a sequence of independent and identically distributed normal random variables, $x_i$, with mean, $\mu$, and unit variance. Suppose that nature or simulation has chosen the true mean to be $\mu_0$; then the probability distribution function of the mean, $\mu$, is given by

\begin{align}
 F_{\mu}(t) = \left[\mu_0 \le t\right],
\end{align}

where an Iverson bracket has been used. A naïve approach to estimating this distribution function would be to replace the true mean on the right hand side with an estimate such as the sample mean, $\overline{x}$, but

\begin{align}
 \operatorname{E}\left[\,\left[\overline{x} \le \mu_0\right]\,\right] = \Pr\left(\overline{x} \le \mu_0\right) = 0.5,
\end{align}

which means the approximation to the true distribution function will be off by 0.5 at the true mean. However, $\left[\overline{x} \le t\right]$ is just the indicator of the 50% one sided confidence interval. More generally, let $z_{\alpha}$ be the critical value of a one sided hypothesis test with significance level $\alpha$; then

\begin{align}
 \operatorname{E}\left[\,\left[\overline{x} - z_{\alpha}n^{-0.5} \le \mu_0\right]\,\right] = \Pr\left(\overline{x} - z_{\alpha}n^{-0.5} \le \mu_0\right) = 1 - \alpha.
\end{align}

If we set $\alpha = 0.05$, then the error of the approximation is reduced by a factor of 10 around the true mean. Of course, if we let $\alpha \rightarrow 0$, then

\begin{align}
 \operatorname{E}\left[\,\left[\overline{x} - z_{\alpha}n^{-0.5} \le \mu_0\right]\,\right] \rightarrow 1 = \left[\mu_0 \le \mu_0\right].
\end{align}

However, this only shows that the expectation is close to the limiting value. Naaman (2016) showed that letting the significance level, $\alpha_n$, shrink with the sample size in such a way that $\sum_{n} \alpha_n < \infty$, while the corresponding critical values satisfy $z_{\alpha_n} n^{-0.5} \rightarrow 0$, results in a finite number of type I and type II errors w.p.1 under fairly mild regularity conditions. This means that for each $t$, there exists an $N(t)$, such that for all $n > N(t)$,

\begin{align}
 \left[\overline{x} - z_{\alpha_n}n^{-0.5} \le t\right] = \left[\mu_0 \le t\right],
\end{align}

where the equality holds w.p.1. So the indicator function of a one sided A.S. confidence interval is a good approximation to the true distribution function.
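
The construction can be illustrated with a short simulation. The sketch below is not taken from Naaman (2016); it simply assumes the summable choice $\alpha_n = n^{-2}$ for the shrinking significance level and checks that, as $n$ grows, the indicator of the one sided confidence interval agrees with the true distribution function $[\mu_0 \le t]$ on a grid of points.

# Illustrative sketch only: approximate F(t) = [mu_0 <= t] by the indicator of
# a one sided confidence interval whose significance level shrinks with n.
# The choice alpha_n = n**-2 is an assumption (any summable sequence would do).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_0 = 1.0                                   # true mean chosen by "nature"
t_grid = np.linspace(0.0, 2.0, 9)            # points at which F(t) is estimated

for n in (10, 100, 10_000):
    x = rng.normal(mu_0, 1.0, size=n)        # i.i.d. normal sample with unit variance
    xbar = x.mean()
    alpha_n = n ** -2.0                      # shrinking significance level (assumed form)
    z = norm.ppf(1 - alpha_n)                # one sided critical value
    estimate = (xbar - z / np.sqrt(n) <= t_grid).astype(int)
    truth = (mu_0 <= t_grid).astype(int)
    print(n, int((estimate != truth).sum()), "grid points misclassified")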

Simulation


By the law of large numbers, the sample mean converges to the true mean in probability. Under suitable conditions, the sample mean, $\overline{x}$, can be used to construct a 95% confidence interval for the mean, $\mu$, and as the sample size grows

\begin{align}
 \Pr\left(\left|\overline{x}-\mu\right| < 1.96\,\hat{\sigma}n^{-0.5}\right)  \rightarrow 0.95 ,
\end{align}

where $\hat{\sigma}^2$ is a consistent estimator of the variance. In this case, the probability that the 95% confidence interval contains $\mu$ approaches 0.95. Many results in statistics focus on issues relating to the rejection of the null when it is false. However, Fisher, who introduced the term null hypothesis, did not even specify an alternative, instead focusing on a well defined null. A.S. hypothesis testing, by contrast, is concerned with tests that perform well regardless of the validity of the null.

Figure: Comparison of p-values for 5% significance level test and A.S. test.
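
A minimal simulation of this comparison is sketched below; the shrinking significance level $\alpha_n = n^{-2}$ is again purely an illustrative assumption. The fixed 95% interval covers $\mu$ about 95% of the time at every sample size, whereas the coverage of the shrinking-level interval approaches one.

# Illustrative sketch: coverage of the fixed 95% interval versus an interval
# whose significance level shrinks with n (alpha_n = n**-2, an assumed choice).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, reps = 0.3, 5_000
for n in (10, 100, 1_000):
    x = rng.normal(mu, 1.0, size=(reps, n))
    xbar = x.mean(axis=1)
    sig = x.std(axis=1, ddof=1)              # consistent estimator of sigma
    cover_95 = np.abs(xbar - mu) < 1.96 * sig / np.sqrt(n)
    z_as = norm.ppf(1 - 0.5 * n ** -2.0)     # two sided critical value for alpha_n
    cover_as = np.abs(xbar - mu) < z_as * sig / np.sqrt(n)
    print(n, round(cover_95.mean(), 3), round(cover_as.mean(), 3))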


Applications


Optional Stopping


For example, suppose a researcher performed an experiment with a sample size of 10 and found no statistically significant result. Then suppose she decided to add one more observation and retest, continuing this process until a significant result was found. Under this scenario (a similar process has been considered in a simulation in the context of animal testing), given that the initial batch of 10 observations resulted in an insignificant result, the probability that the experiment will be stopped at some finite sample size, $n_s$, can be bounded using Boole's inequality:

\begin{align}
 \Pr\left(n_s < \infty\right) \le \sum_{n=11}^{\infty} \alpha_n ,
\end{align}

where $\alpha_n$ is the significance level used with a sample of size $n$. This compares favorably with fixed significance level testing, which has a finite stopping time with probability one; however, this bound will not be meaningful for all bandwidths, as the above sum can be greater than one for some choices of bandwidth. But even using such a bandwidth, if the testing was done in batches of 10, then the bound becomes

\begin{align}
 \Pr\left(n_s < \infty\right) \le \sum_{k=2}^{\infty} \alpha_{10k} ,
\end{align}

which results in a relatively large probability that the process will never end.
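
A small simulation sketch of the optional stopping scenario is given below. The shrinking level $\alpha_n = n^{-2}$ and the finite horizon are assumptions made only for illustration: under a true null the fixed 5% rule keeps producing spurious stops, while the summable-level rule stops rarely, consistent with the Boole bound.

# Illustrative sketch: add one observation at a time after an initial batch of
# 10 drawn under a true null, stopping as soon as the test is "significant".
# The fixed 5% level is compared with the assumed shrinking level alpha_n = n**-2;
# Boole's inequality bounds the stopping probability of the latter by sum_n alpha_n.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def stop_fraction(alpha_of_n, reps=300, horizon=2_000):
    stopped = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, size=10)
        s, ss, n = x.sum(), (x ** 2).sum(), 10
        while n < horizon:
            mean = s / n
            var = (ss - n * mean ** 2) / (n - 1)
            z = np.sqrt(n) * mean / np.sqrt(var)
            if abs(z) > norm.ppf(1 - 0.5 * alpha_of_n(n)):   # two sided test
                stopped += 1
                break
            xi = rng.normal(0.0, 1.0)
            s, ss, n = s + xi, ss + xi ** 2, n + 1
    return stopped / reps

print("fixed 5% level:  ", stop_fraction(lambda n: 0.05))
print("shrinking level: ", stop_fraction(lambda n: n ** -2.0))
print("Boole bound:     ", sum(k ** -2.0 for k in range(11, 100_000)))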

Publication Bias


As another example of the power of this approach, if an academic journal only accepts papers with p-values less than 0.05, then roughly 1 in 20 independent studies of the same effect would find a significant result when there was none. However, if the journal required a minimum sample size of 100 and a maximum bandwidth corresponding to a significance level that shrinks with the sample size, then one would expect roughly 1 in 250 studies to find an effect when there was none (if the minimum sample size was 30, it would still be 1 in 60). If the maximum bandwidth was chosen more conservatively (which will have better small sample performance with regard to type I error when multiple comparisons are a concern), one would expect roughly 1 in 10,000 studies to find an effect when there was none (if the minimum sample size was 30, it would be 1 in 900).
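
The arithmetic behind such figures can be sketched as follows. The exact bandwidths referred to above are not reproduced here; $\alpha_n = n^{-2}$ is used only as an example of a significance level evaluated at the minimum allowed sample size, and the fixed 5% rule gives the familiar 1 in 20.

# Illustrative sketch: the worst case false-positive odds occur at the smallest
# allowed sample size.  alpha_n = n**-2 is an assumed example of a shrinking
# significance level, not necessarily the bandwidth used in the text above.
from scipy.stats import norm

def false_positive_odds(min_n, alpha_of_n=lambda n: n ** -2.0):
    alpha = alpha_of_n(min_n)                # level at the minimum sample size
    z_crit = norm.ppf(1 - alpha / 2)         # implied two sided critical value
    return round(1 / alpha), z_crit

print("fixed 5% level: about 1 in 20 studies")
for min_n in (100, 30):
    odds, z = false_positive_odds(min_n)
    print(f"min n = {min_n}: about 1 in {odds} studies, critical value {z:.2f}")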

Jeffreys-Lindley Paradox


Lindley's paradox occurs when, for some observed result $x$:

  1. The result $x$ is "significant" by a frequentist test of $H_0$, indicating sufficient evidence to reject $H_0$, say, at the 5% level, and
  2. The posterior probability of $H_0$ given $x$ is high, indicating strong evidence that $H_0$ is in better agreement with $x$ than $H_1$.

However, the paradox does not apply to A.S. hypothesis tests. The Bayesian and the frequentist will eventually reach the same conclusion.
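
A numerical sketch of this can be given as follows; the standard normal prior on $\mu$ under $H_1$, the equal prior odds, and the growing rejection threshold $\sqrt{2.2\log n}$ are all illustrative assumptions rather than choices taken from the source. Holding a borderline $z$-statistic fixed while $n$ grows, the fixed 5% test keeps rejecting $H_0$, the posterior probability of $H_0$ tends to one, and the growing threshold eventually stops rejecting, so the almost sure test and the Bayesian agree.

# Illustrative sketch of the paradox: fixed z-statistic of 2.5, unit-variance
# normal data with known sigma, N(0,1) prior on mu under H1, equal prior odds.
# The threshold sqrt(2 * 1.1 * log n) is one assumed summable-level choice.
import numpy as np
from scipy.stats import norm

z_obs, tau2 = 2.5, 1.0
for n in (10, 1_000, 100_000, 10_000_000):
    xbar = z_obs / np.sqrt(n)
    m0 = norm.pdf(xbar, scale=np.sqrt(1 / n))          # marginal density under H0
    m1 = norm.pdf(xbar, scale=np.sqrt(tau2 + 1 / n))   # marginal density under H1
    post_h0 = m0 / (m0 + m1)                           # posterior probability of H0
    as_threshold = np.sqrt(2 * 1.1 * np.log(n))
    print(f"n={n:>8}: p-value={2 * (1 - norm.cdf(z_obs)):.3f}, "
          f"Pr(H0|x)={post_h0:.3f}, growing-threshold test rejects: {z_obs > as_threshold}")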


References

  • Naaman, Michael (2016). "Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox". Electronic Journal of Statistics. 10 (1): 1526–1550.

Category:Statistical hypothesis testing

Category:Bayesian statistics