In statistics, interval estimation is the use of sample data to calculate an interval of possible (or probable) values of an unknown population parameter, in contrast to point estimation, which gives a single value. Neyman (1937) identified interval estimation ("estimation by interval") as distinct from point estimation ("estimation by unique estimate"). In doing so, he recognised that then-recent work quoting results in the form of an estimate plus-or-minus a standard deviation indicated that interval estimation was actually the problem statisticians really had in mind.
The two most prevalent forms of interval estimation are confidence intervals (a frequentist method) and credible intervals (a Bayesian method).
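As a minimal illustration of the frequentist form (a sketch, not from the source; the data and the z-multiplier of 1.96 are illustrative assumptions), a normal-approximation confidence interval for a mean can be computed directly from a sample:

```python
import math
import statistics

def mean_confidence_interval(data, z=1.96):
    """Normal-approximation confidence interval for the mean.

    z = 1.96 gives roughly 95% coverage; for small samples a
    t-based multiplier would be more appropriate.
    """
    n = len(data)
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
    return (m - z * se, m + z * se)

sample = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.95, 5.05]
low, high = mean_confidence_interval(sample)
print(f"approximate 95% CI: ({low:.3f}, {high:.3f})")
```

The resulting interval is reported instead of the bare sample mean, in the "estimate plus-or-minus" spirit that Neyman formalised.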
Other common approaches to interval estimation, also encompassed by statistical theory, include tolerance intervals and prediction intervals.
There is a third approach to statistical inference, namely fiducial inference, that also produces interval estimates. Non-statistical methods that can lead to interval estimates include fuzzy logic.
The scientific problems associated with interval estimation may be summarised as follows:
- When interval estimates are reported, they should have a commonly-held interpretation in the scientific community and more widely. In this regard, credible intervals are held to be most readily understood by the general public. Interval estimates derived from fuzzy logic have much more application-specific meanings.
- For commonly occurring situations there should be sets of standard procedures that can be used, subject to the checking and validity of any required assumptions. This applies for both confidence intervals and credible intervals.
- For more novel situations there should be guidance on how interval estimates can be formulated. In this regard confidence intervals and credible intervals have a similar standing but there are differences:
- credible intervals can readily deal with prior information, while confidence intervals cannot.
- confidence intervals are more flexible and can be used practically in more situations than credible intervals: one area where credible intervals suffer in comparison is in dealing with non-parametric models (see non-parametric statistics).
- There should be ways of testing the performance of interval estimation procedures. This arises because many such procedures involve approximations of various kinds and there is a need to check that the actual performance of a procedure is close to what is claimed. The use of stochastic simulations makes this straightforward in the case of confidence intervals, but it is somewhat more problematic for credible intervals, where prior information needs to be taken properly into account. Credible intervals can be checked in situations representing no prior information, but the check then amounts to verifying the long-run frequency properties of the procedure.
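The stochastic-simulation check mentioned above can be sketched as follows (an illustrative sketch; the normal model, sample size, and z-based interval are assumptions, not from the source). Many samples are drawn from a known distribution, an interval is computed for each, and the fraction of intervals that contain the true parameter is compared with the nominal level:

```python
import math
import random
import statistics

def coverage(n_trials=2000, n=30, mu=0.0, sigma=1.0, z=1.96, seed=1):
    """Estimate the long-run coverage of a z-based interval for the mean.

    Draws n_trials samples of size n from N(mu, sigma^2), computes a
    z-based interval for each, and returns the fraction of intervals
    that contain the true mean mu.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / math.sqrt(n)
        if m - z * se <= mu <= m + z * se:
            hits += 1
    return hits / n_trials

print(coverage())  # should be close to the nominal 0.95
```

A procedure whose estimated coverage falls well below the nominal level would be flagged as performing worse than claimed.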
Severini (1991) discusses conditions under which credible intervals and confidence intervals will produce similar results, and also discusses both the coverage probabilities of credible intervals and the posterior probabilities associated with confidence intervals.
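One elementary case of such agreement (an illustrative sketch, not drawn from Severini's paper; the data and parameters are assumptions) is the normal model with known variance: under a flat prior on the mean, the posterior is N(x̄, σ²/n), so the central 95% credible interval has exactly the same endpoints as the standard 95% confidence interval:

```python
import math
import random
import statistics

random.seed(42)
sigma = 2.0  # known standard deviation
data = [random.gauss(10.0, sigma) for _ in range(50)]

xbar = statistics.mean(data)
half_width = 1.96 * sigma / math.sqrt(len(data))

# Frequentist 95% confidence interval for the mean.
confidence_interval = (xbar - half_width, xbar + half_width)

# With a flat (improper) prior, the posterior for the mean is
# N(xbar, sigma^2 / n), so the central 95% credible interval
# has the same endpoints.
credible_interval = (xbar - half_width, xbar + half_width)

print(confidence_interval)
print(credible_interval)
```

With an informative prior, or in models where the posterior is not normal, the two intervals generally differ, which is the situation Severini's conditions address.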
See also 
- Algorithmic inference
- Induction (philosophy)
- Multiple comparisons
- Philosophy of statistics
- Point estimation
- Predictive inference
- Behrens–Fisher problem, which has played an important role in the development of the theory behind applicable statistical methodologies.
References
- Neyman, J. (1937) "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability" Philosophical Transactions of the Royal Society of London A, 236, 333–380.
- Severini, T.A. (1991) "On the relationship between Bayesian and Non-Bayesian interval estimates" Journal of the Royal Statistical Society, Series B, 53 (3), 611–618.
- Kendall, M.G. and Stuart, A. (1973). The Advanced Theory of Statistics. Vol 2: Inference and Relationship (3rd Edition). Griffin, London.
- In the above, Chapter 20 covers confidence intervals, while Chapter 21 covers fiducial intervals and Bayesian intervals and includes a discussion comparing the three approaches; Chapter 21 also discusses the Behrens–Fisher problem. Note that this work predates modern computationally intensive methodologies.