# User:CD.Rutgers/Confidence Distribution

In statistics, the concept of a confidence distribution (CD) has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it was typically constructed by inverting the upper limits of lower-sided confidence intervals of all levels, and it was commonly associated with a fiducial[1] interpretation (fiducial distribution).

In recent years, there has been a surge of renewed interest in confidence distributions. In the more recent developments, the concept of confidence distribution has emerged as a purely frequentist concept, without any fiducial interpretation or reasoning. Conceptually, a confidence distribution is no different from a point estimator or an interval estimator (confidence interval), but it uses a sample-dependent distribution function on the parameter space (instead of a point or an interval) to estimate the parameter of interest.

A simple example of a confidence distribution that has been broadly used in statistical practice is the bootstrap distribution.[2] The development and interpretation of a bootstrap distribution does not involve any fiducial reasoning; the same is true for the concept of a confidence distribution. But the notion of a confidence distribution is much broader than that of a bootstrap distribution. In particular, recent research suggests that it encompasses and unifies a wide range of examples, from regular parametric cases (including most examples of the classical development of Fisher's fiducial distribution) to bootstrap distributions, p-value functions [3], normalized likelihood functions and, in some cases, Bayesian priors and Bayesian posteriors.[4]

Just as a Bayesian posterior distribution contains a wealth of information for any type of Bayesian inference, a confidence distribution contains a wealth of information for constructing almost all types of frequentist inferences, including point estimates, confidence intervals and p-values, among others. Some recent developments have highlighted the promising potential of the CD concept as an effective inferential tool.

## The history of the CD concept

Neyman (1937)[5] introduced the idea of "confidence" in his seminal paper on confidence intervals, which clarified the frequentist repetition property. According to Fraser [6], the seed (idea) of the confidence distribution can even be traced back to Bayes (1763)[7] and Fisher (1930) [1]. Some researchers view the confidence distribution as "the Neymanian interpretation of Fisher's fiducial distribution"[8], which was "furiously disputed by Fisher" [9]. It is also believed that these "unproductive disputes" and Fisher's "stubborn insistence" (Zabell 1992)[9] might be the reason that the concept of confidence distribution has long been misconstrued as a fiducial concept and has not been fully developed under the frequentist framework [4] [10]. Indeed, the confidence distribution is a purely frequentist concept with a purely frequentist interpretation.

## Definition

### Classical definition

Classically, a confidence distribution is defined by inverting the upper limits of a series of lower-sided confidence intervals; see, e.g., Efron [11] and Cox [12]. In particular:

Definition (Classical Definition)
For every α in (0,1), let ${\displaystyle (-\infty ,\xi _{n}(\alpha )]}$ be a 100α% lower-sided confidence interval for θ, where ${\displaystyle \xi _{n}(\alpha )=\xi _{n}(X_{n},\alpha )}$ is continuous and increasing in α for each sample Xn. Then, ${\displaystyle H_{n}(\cdot )=\xi _{n}^{-1}(\cdot )}$ is a confidence distribution for θ.

Efron (1993) [11] stated that this distribution "assigns probability 0.05 to θ lying between the upper endpoints of the 0.90 and 0.95 confidence interval, etc." and "it has powerful intuitive appeal". In the classical literature, the confidence distribution function is interpreted as a distribution function of the parameter θ, which is impossible unless fiducial reasoning is involved since, in a frequentist setting, the parameters are fixed and nonrandom.
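To make the inversion concrete, here is a small numerical sketch for a normal mean with known σ, where the 100α% lower-sided interval has upper limit X̄ + σΦ⁻¹(α)/√n; the sample, its size and the seed are illustrative assumptions, not values from the source.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulated data; the true values and seed are assumptions.
rng = np.random.default_rng(0)
n, mu, sigma = 50, 2.0, 1.5          # sigma is treated as known
x = rng.normal(mu, sigma, size=n)
xbar = x.mean()

def xi(alpha):
    """Upper limit of the 100*alpha% lower-sided interval (-inf, xi(alpha)]."""
    return xbar + sigma / np.sqrt(n) * stats.norm.ppf(alpha)

def H(theta):
    """The CD obtained by inverting xi: H(xi(alpha)) = alpha for every alpha."""
    return stats.norm.cdf(np.sqrt(n) * (theta - xbar) / sigma)

# Inverting the upper limits recovers the confidence level at every alpha.
for alpha in (0.05, 0.5, 0.9, 0.95):
    assert np.isclose(H(xi(alpha)), alpha)
```

The function `H` here is exactly the known-variance CD for the mean that appears in Example 1 below.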

Interpreting the CD function entirely from a frequentist viewpoint, rather than as a distribution function of a (fixed, nonrandom) parameter, is one of the major departures of the recent developments from the classical approach. The advantage of treating the confidence distribution as a purely frequentist concept (similar to a point estimator) is that it is then free from the restrictive, if not controversial, constraints set forth by Fisher on fiducial distributions.[4] [10]

### The modern definition

The following definition was formulated in Schweder and Hjort (2002)[8] and Singh, Xie and Strawderman (2001, 2005) [13] [14]. In the definition, Θ is the parameter space of the unknown parameter of interest θ and χ is the sample space corresponding to data Xn={X1,...,Xn}.

Definition
A function Hn(•) = Hn(Xn,•) on χ × Θ → [0,1] is called a confidence distribution (CD) for a parameter θ if it satisfies two requirements: (R1) for each given Xn ∈ χ, Hn(•) is a continuous cumulative distribution function on Θ; (R2) at the true parameter value θ = θ0, Hn(θ0) ≡ Hn(Xn, θ0), as a function of the sample Xn, follows the uniform distribution U[0,1]. Also, the function Hn(•) is an asymptotic CD (aCD) if the U[0,1] requirement holds only asymptotically and the continuity requirement on Hn(•) is dropped.

In nontechnical terms, a confidence distribution is a function of both the parameter and the random sample, with two requirements. The first requirement (R1) simply requires that a CD should be a distribution on the parameter space. The second requirement (R2) sets a restriction on the function so that inferences (point estimators, confidence intervals, hypothesis tests, etc.) based on the confidence distribution have the desired frequentist properties. This is similar to the restrictions in point estimation that ensure certain desired properties, such as unbiasedness, consistency and efficiency.[15][4]

Singh et al. (2005) [14] showed that a confidence distribution derived from inverting the upper limits of confidence intervals (classical definition) also satisfies the requirements in the above definition and that this version of the definition is consistent with the classical definition.
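Requirement (R2) can be checked by simulation. The sketch below (the true values, sample size, replication count and seed are assumptions for illustration) evaluates the known-variance CD Φ(√n(μ0 − X̄)/σ) for a normal mean at the true value μ0 over repeated samples; under (R2), the resulting values behave like draws from U[0,1].

```python
import numpy as np
from scipy import stats

# Illustrative sketch: all numeric settings here are assumptions.
rng = np.random.default_rng(1)
n, mu0, sigma = 30, 0.0, 1.0
reps = 5000

# Evaluate H_Phi(mu0) = Phi(sqrt(n)(mu0 - xbar)/sigma) over repeated samples.
u = np.empty(reps)
for i in range(reps):
    x = rng.normal(mu0, sigma, size=n)
    u[i] = stats.norm.cdf(np.sqrt(n) * (mu0 - x.mean()) / sigma)

# Under (R2) these values are draws from U[0,1]: mean 1/2, variance 1/12.
assert abs(u.mean() - 0.5) < 0.02
assert abs(u.var() - 1.0 / 12.0) < 0.01
```

The same check applied at a wrong parameter value would show the values piling up near 0 or 1 rather than spreading uniformly.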

Example 1: Normal Mean and Variance

Suppose a normal sample Xi~N(μ, σ2), i=1,2,...,n is given.
(1) Variance σ2 is known
One can verify that both the functions ${\displaystyle H_{\Phi }(\mu )}$ and ${\displaystyle H_{t}(\mu )}$:
${\displaystyle H_{\Phi }(\mu )=\Phi \left({\frac {{\sqrt {n}}(\mu -{\bar {X}})}{\sigma }}\right)\quad {\mbox{and}}\quad H_{t}(\mu )=F_{t_{n-1}}\left({\frac {{\sqrt {n}}(\mu -{\bar {X}})}{s}}\right)}$
satisfy the two requirements in the CD definition, and they are confidence distribution functions for μ. Here, Φ is the cumulative distribution function of the standard normal distribution, and ${\displaystyle F_{t_{n-1}}}$ is the cumulative distribution function of Student's ${\displaystyle t_{n-1}}$ distribution. Furthermore,
${\displaystyle H_{A}(\mu )=\Phi \left({\frac {{\sqrt {n}}(\mu -{\bar {X}})}{s}}\right)}$
satisfies the definition of an asymptotic confidence distribution when n→∞, and it is an asymptotic confidence distribution for μ. Using ${\displaystyle H_{\Phi }(\mu )}$ and ${\displaystyle H_{A}(\mu )}$ amounts to using ${\displaystyle N({\bar {X}},\sigma ^{2}/n)}$ and ${\displaystyle N({\bar {X}},s^{2}/n)}$, respectively, to estimate ${\displaystyle \mu }$.
(2) Variance σ2 is unknown
For the parameter μ, ${\displaystyle H_{\Phi }(\mu )}$ now involves the unknown parameter σ, so it can no longer serve as a "distribution estimator" or confidence distribution for μ, since it fails the requirements in the CD definition. However, ${\displaystyle H_{t}(\mu )}$ is still a CD for μ and ${\displaystyle H_{A}(\mu )}$ is an aCD for μ.
For the parameter σ2, the sample-dependent cumulative distribution function
${\displaystyle H_{\chi ^{2}}(\theta )=1-F_{\chi _{n-1}^{2}}\left((n-1)s^{2}/\theta \right)}$
is a confidence distribution function for σ2. Here, ${\displaystyle F_{\chi _{n-1}^{2}}}$ is the cumulative distribution function of the chi-squared ${\displaystyle \chi _{n-1}^{2}}$ distribution with n−1 degrees of freedom.
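The CD candidates of Example 1 can be evaluated directly with standard distribution routines. The sketch below (simulated data, true values and seed are illustrative assumptions) checks that each candidate is an increasing distribution function on the parameter space, as (R1) requires; the variance CD uses the chi-squared pivot (n−1)s²/σ² with s² the usual unbiased variance estimate.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulated sample; the true values and seed are assumptions.
rng = np.random.default_rng(2)
n = 40
x = rng.normal(5.0, 2.0, size=n)            # both parameters treated as unknown
xbar, s = x.mean(), x.std(ddof=1)

def H_t(mu):
    """CD for mu based on the t pivot (valid when sigma is unknown)."""
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

def H_A(mu):
    """Asymptotic CD for mu: normal approximation with s in place of sigma."""
    return stats.norm.cdf(np.sqrt(n) * (mu - xbar) / s)

def H_var(theta):
    """CD for sigma^2 from the chi-squared pivot (n-1)s^2/sigma^2."""
    return 1.0 - stats.chi2.cdf((n - 1) * s**2 / theta, df=n - 1)

# Each function is increasing and spans (0, 1), as a distribution
# function on the parameter space must.
grid = np.linspace(3.0, 7.0, 9)
assert np.all(np.diff(H_t(grid)) > 0) and np.all(np.diff(H_A(grid)) > 0)
assert H_var(1e-6) < 1e-3 and H_var(1e6) > 0.999
```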

Example 2: Bivariate Normal Correlation

Let ρ denote the correlation coefficient of a bivariate normal population. It is well known that Fisher's z, defined by the Fisher transformation:
${\displaystyle z={1 \over 2}\ln {1+r \over 1-r}}$
has the limiting distribution ${\displaystyle N({1 \over 2}\ln {{1+\rho } \over {1-\rho }},{1 \over n-3})}$ with a fast rate of convergence, where r is the sample correlation and n is the sample size.
One can verify that the function
${\displaystyle H_{n}(\rho )=1-\Phi \left({\sqrt {n-3}}\left({1 \over 2}\ln {1+r \over 1-r}-{1 \over 2}\ln {{1+\rho } \over {1-\rho }}\right)\right)}$
is an asymptotic confidence distribution for ρ.
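Since the z transformation has the closed-form inverse tanh, the quantiles of this CD are available in closed form, which makes interval construction for ρ immediate. The sketch below uses simulated bivariate normal data; the sample size, true correlation and seed are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulated data; n, rho_true and the seed are assumptions.
rng = np.random.default_rng(3)
n, rho_true = 200, 0.6
cov = [[1.0, rho_true], [rho_true, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]   # sample correlation

def fisher_z(c):
    """Fisher z transformation."""
    return 0.5 * np.log((1.0 + c) / (1.0 - c))

def H(rho):
    """Asymptotic CD for rho based on the Fisher z transformation."""
    return 1.0 - stats.norm.cdf(np.sqrt(n - 3) * (fisher_z(r) - fisher_z(rho)))

def H_inv(beta):
    """CD quantile in closed form: tanh inverts the z transformation."""
    return np.tanh(fisher_z(r) + stats.norm.ppf(beta) / np.sqrt(n - 3))

# An equal-tailed 95% interval for rho from the 2.5% and 97.5% CD quantiles.
lo, hi = H_inv(0.025), H_inv(0.975)
assert np.isclose(H(lo), 0.025) and np.isclose(H(hi), 0.975)
```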

## Using CD to make inference

### Confidence interval

From the CD definition, it is evident that the intervals ${\displaystyle (-\infty ,H_{n}^{-1}(1-\alpha )]}$, ${\displaystyle [H_{n}^{-1}(\alpha ),\infty )}$ and ${\displaystyle [H_{n}^{-1}(\alpha /2),H_{n}^{-1}(1-\alpha /2)]}$ provide 100(1−α)%-level confidence intervals of different kinds for θ, for any α ∈ (0,1). Also, ${\displaystyle [H_{n}^{-1}(\alpha _{1}),H_{n}^{-1}(1-\alpha _{2})]}$ is a 100(1−α1−α2)%-level confidence interval for the parameter θ for any α1 > 0, α2 > 0 and α1 + α2 < 1. Here, ${\displaystyle H_{n}^{-1}(\beta )}$ is the 100β% quantile of ${\displaystyle H_{n}(\theta )}$, i.e., it solves for θ in the equation ${\displaystyle H_{n}(\theta )=\beta }$. The same holds for an aCD, where the confidence level is achieved in the limit.
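As a sketch of this quantile construction (simulated data; the sample size and seed are illustrative assumptions), the one-sided and equal-tailed intervals for a normal mean fall out of the quantile function of its t-based CD:

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulated sample; n and the seed are assumptions.
rng = np.random.default_rng(4)
n = 25
x = rng.normal(10.0, 3.0, size=n)
xbar, s = x.mean(), x.std(ddof=1)

def H_inv(beta):
    """Quantile of the t-based CD H_t: solves H_t(mu) = beta for mu."""
    return xbar + s / np.sqrt(n) * stats.t.ppf(beta, df=n - 1)

alpha = 0.05
upper = (-np.inf, H_inv(1 - alpha))                   # one-sided 95% interval
lower = (H_inv(alpha), np.inf)                        # one-sided 95% interval
two_sided = (H_inv(alpha / 2), H_inv(1 - alpha / 2))  # equal-tailed 95% interval

# The equal-tailed CD interval coincides with the classical t interval.
half = stats.t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n)
assert np.allclose(two_sided, (xbar - half, xbar + half))
```

One CD thus yields confidence intervals of every level and every kind without re-deriving a pivot for each case.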

### Point estimation

Point estimators can also be constructed given a confidence distribution estimator for the parameter of interest. For example, given Hn(θ), the CD for a parameter θ, natural choices of point estimators include the median ${\displaystyle M_{n}=H_{n}^{-1}(1/2)}$, the mean ${\displaystyle {\bar {\theta }}_{n}=\int _{-\infty }^{\infty }t\,dH_{n}(t)}$, and the maximum point of the CD density ${\displaystyle {\widehat {\theta }}_{n}=\arg \max _{\theta }h_{n}(\theta ),\ h_{n}(\theta )=H_{n}'(\theta )}$.

Under some modest conditions, among other properties, one can prove that these point estimators are all consistent.[16][4]
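The three estimators can be read off a CD numerically. The sketch below (simulated data; the sample size and seed are illustrative assumptions) computes the median in closed form and approximates the mean and mode of the t-based CD for a normal mean on a grid; since that CD is symmetric about the sample mean, all three coincide with it.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulated sample; n and the seed are assumptions.
rng = np.random.default_rng(5)
n = 30
x = rng.normal(-1.0, 2.0, size=n)
xbar, s = x.mean(), x.std(ddof=1)

def H(mu):
    """t-based CD for the normal mean."""
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

# Median: solves H(mu) = 1/2 (closed form via the t quantile function).
median = xbar + s / np.sqrt(n) * stats.t.ppf(0.5, df=n - 1)

# Mean and mode: approximate the integral t dH(t) and the density H' on a grid.
grid = np.linspace(xbar - 5 * s, xbar + 5 * s, 20001)
dH = np.diff(H(grid))
mid = 0.5 * (grid[:-1] + grid[1:])
mean = np.sum(mid * dH)        # Riemann-Stieltjes approximation of the CD mean
mode = mid[np.argmax(dH)]      # equal spacing, so argmax of dH tracks the density

# This CD is symmetric about the sample mean, so all three estimators agree.
assert np.isclose(median, xbar)
assert np.isclose(mean, xbar, atol=1e-3)
assert np.isclose(mode, xbar, atol=5e-3)
```

For an asymmetric CD (such as the chi-squared-based CD for a variance), the three estimators differ, and the choice among them matters.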

### Hypothesis testing

One can derive a p-value for a test, either one-sided or two-sided, concerning the parameter θ from its confidence distribution Hn(θ) (see, e.g., Singh et al. (2007)[16] and Xie et al. (2011)[4]). Denote by ${\displaystyle p_{s}(C)=H_{n}(C)=\int _{C}dH_{n}(\theta )}$ the probability mass of a set C under the confidence distribution function. This ps(C) is called the "support" in CD inference and is also known as the "belief" in the fiducial literature (cf. Kendall and Stuart, 1974 [17]). We have:

(1) For the one-sided test K0: θ ∈ C vs. K1: θ ∈ Cc, where C is of the type (−∞, b] or [b, ∞), ${\displaystyle \sup _{\theta \in C}P_{\theta }(p_{s}(C)\leq \alpha )=\alpha }$. Thus, ps(C) = Hn(C) is the corresponding p-value of the test.

(2) For the singleton test K0: θ = b vs. K1: θ ≠ b, ${\displaystyle P_{\theta =b}(2\min\{p_{s}(C_{lo}),p_{s}(C_{up})\}\leq \alpha )=\alpha }$, and 2 min{ps(Clo), ps(Cup)} = 2 min{Hn(b), 1 − Hn(b)} is the corresponding p-value of the test. Here, Clo = (−∞, b] and Cup = [b, ∞).

See Figure 1 from Xie and Singh (2011) [4] for a graphical illustration of the CD inference.
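As a sketch of (1) and (2) for a normal mean (simulated data; the sample size, test point b and seed are illustrative assumptions), the support-based p-values from the t-based CD reproduce the classical one-sample t-test p-values:

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulated sample; n, b and the seed are assumptions.
rng = np.random.default_rng(6)
n = 40
x = rng.normal(0.3, 1.0, size=n)
xbar, s = x.mean(), x.std(ddof=1)

def H(mu):
    """t-based CD for the normal mean."""
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

b = 0.0
p_one_sided = H(b)                        # support of C_lo = (-inf, b]
p_two_sided = 2 * min(H(b), 1 - H(b))     # singleton test K0: mu = b

# Both agree with the classical one-sample t-test p-values.
t_stat = np.sqrt(n) * (xbar - b) / s
assert np.isclose(p_one_sided, stats.t.sf(t_stat, df=n - 1))
assert np.isclose(p_two_sided, 2 * stats.t.sf(abs(t_stat), df=n - 1))
```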

## References

1. Fisher, R.A. (1930). "Inverse probability." Proc. Cambridge Philos. Soc. 26, 528-535.
2. Efron, B. (1998). "R.A. Fisher in the 21st Century." Statistical Science 13, 95-122.
3. Fraser, D.A.S. (1991). "Statistical inference: Likelihood to significance." J. Amer. Statist. Assoc. 86, 258-265.
4. Xie, M. and Singh, K. (2011). "On Confidence Distribution, the Frequentist Distribution Estimator of a Parameter." Draft review article (invited).
5. Neyman, J. (1937). "Outline of a theory of statistical estimation based on the classical theory of probability." Phil. Trans. Roy. Soc. A237, 333-380.
6. Fraser, D.A.S. (2011). "Is Bayes posterior just quick and dirty confidence?" Statistical Science. In press.
7. Bayes, T. (1763). "An essay towards solving a problem in the doctrine of chances." Phil. Trans. Roy. Soc. London 53, 370-418; 54, 296-325. Reprinted in Biometrika 45 (1958), 293-315.
8. Schweder, T. and Hjort, N.L. (2002). "Confidence and likelihood." Scandinavian Journal of Statistics 29, 309-332.
9. Zabell, S.L. (1992). "R.A. Fisher and the fiducial argument." Stat. Sci. 7, 369-387.
10. Singh, K. and Xie, M. (2011). "Discussion on Professor Fraser's article 'Is Bayes posterior just quick and dirty confidence?'" Statistical Science. In press.
11. Efron, B. (1993). "Bayes and likelihood calculations from confidence intervals." Biometrika 80, 3-26.
12. Cox, D.R. (2006). Principles of Statistical Inference. Cambridge University Press.
13. Singh, K., Xie, M. and Strawderman, W.E. (2001). "Confidence distributions—concept, theory and applications." Technical report, Dept. Statistics, Rutgers Univ. Revised 2004.
14. Singh, K., Xie, M. and Strawderman, W.E. (2005). "Combining Information from Independent Sources Through Confidence Distribution." Ann. Statist. 33, 159-183.
15. Xie, M., Liu, R.Y., Damaraju, C.V., and Olson, W.H. (2009). "Incorporating expert opinions with information from binomial clinical trials." Technical report, Dept. Statistics, Rutgers Univ. Submitted for publication.
16. Singh, K., Xie, M. and Strawderman, W.E. (2007). "Confidence Distribution (CD)-Distribution Estimator of a Parameter." In Complex Datasets and Inverse Problems, IMS Lecture Notes-Monograph Series 54 (R. Liu et al., eds.), 132-150.
17. Kendall, M. and Stuart, A. (1974). The Advanced Theory of Statistics (Chapter 21). Wiley.
18. Fisher, R.A. (1973). Statistical Methods and Scientific Inference, 3rd edition. Hafner Press, New York.
19. Neyman, J. (1941). "Fiducial argument and the theory of confidence intervals." Biometrika 32, 128-150.
20. Parzen, E. (2005). All Statistical Methods, Parameter Confidence Quantiles. Noether Award Lecture at the Joint Statistical Meeting.
21. Schweder, T. and Hjort, N.L. (2003). "Frequentist analogues of priors and posteriors." In Econometrics and the Philosophy of Economics (B.P. Stigum, ed.). Princeton University Press, 285-317.
22. Schweder, T. and Hjort, N.L. (2009). Confidence, Likelihood and Probability. Cambridge University Press (forthcoming).
23. Xie, M., Singh, K. and Strawderman, W.E. (2011). "Confidence distributions and a unified framework for meta-analysis." J. Amer. Statist. Assoc. 106(493), 320-333.

## Bibliography

• Fisher, R.A. (1956). Statistical Methods and Scientific Inference. New York: Hafner. ISBN 0028447409.
• Fisher, R.A. (1955). "Statistical methods and scientific induction." J. Roy. Statist. Soc. Ser. B 17, 69-78. (Criticism of the statistical theories of Jerzy Neyman and Abraham Wald from a fiducial perspective.)
• Hannig, J. (2009). "On generalized fiducial inference." Statistica Sinica 19, 491-544.
• Lawless, J.F. and Fredette, M. (2005). "Frequentist prediction intervals and predictive distributions." Biometrika 92(3), 529-542.
• Lehmann, E.L. (1993). "The Fisher, Neyman-Pearson theories of testing hypotheses: one theory or two?" J. Amer. Statist. Assoc. 88, 1242-1249.
• Neyman, J. (1956). "Note on an Article by Sir Ronald Fisher." Journal of the Royal Statistical Society, Series B (Methodological) 18(2), 288-294. JSTOR 2983716. (Reply to Fisher 1955, which diagnoses a fallacy of "fiducial inference".)
• Schweder, T., Sadykova, D., Rugh, D. and Koski, W. (2010). "Population estimates from aerial photographic surveys of naturally and variably marked bowhead whales." Journal of Agricultural, Biological and Environmental Statistics 15, 1-19.
• Singh, K. and Xie, M. (2011). "CD-posterior -- combining prior and data through confidence distributions." A Festschrift in Honor of William E. Strawderman. IMS-LNS Monograph Series (D. Fourdrinier et al., eds.). In press.