# Imprecise probability

Imprecise probability generalizes probability theory to allow for partial probability specifications. It is applicable when information is scarce, vague, or conflicting, so that a unique probability distribution may be hard to identify; the theory then aims to represent the available knowledge more accurately. Imprecision is also useful for dealing with expert elicitation, because:

• People have a limited ability to determine their own subjective probabilities and might find that they can only provide an interval.
• As an interval is compatible with a range of opinions, the analysis ought to be more convincing to a range of different people.

## Introduction

Uncertainty is traditionally modelled by a probability distribution, as argued by Kolmogorov,[1] Laplace, de Finetti,[2] Ramsey, Cox, Lindley, and many others. However, this view has not been unanimously accepted by scientists, statisticians, and probabilists: it has been argued that some modification or broadening of probability theory is required, because one may not always be able to provide a probability for every event, particularly when only little information or data is available (an early example of such criticism is Boole's critique[3] of Laplace's work), or when we wish to model the probabilities that a group agrees on, rather than those of a single individual.

Perhaps the most straightforward generalization is to replace a single probability specification with an interval specification. Lower and upper probabilities, denoted by $\underline{P}(A)$ and $\overline{P}(A)$, or more generally, lower and upper expectations (previsions),[4][5][6][7] aim to fill this gap:

• the special case with $\underline{P}(A)=\overline{P}(A)$ for all events $A$ provides precise probability, whilst
• $\underline{P}(A)=0$ and $\overline{P}(A)=1$ represents no constraint at all on the specification of $P(A)$,

with a flexible continuum in between.
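The continuum described above can be sketched numerically. In the following illustrative example (not from the source), lower and upper probabilities are taken as envelopes over a finite set of candidate distributions on a hypothetical three-element space; a singleton set recovers precise probability, while point masses on every outcome yield the vacuous bounds $[0,1]$:

```python
# Illustrative sketch: lower/upper probabilities as envelopes over a
# "credal set" of candidate distributions on the finite space {a, b, c}.

def envelope(credal_set, event):
    """Lower and upper probability of an event (a set of outcomes)."""
    values = [sum(p for w, p in dist.items() if w in event)
              for dist in credal_set]
    return min(values), max(values)

event = {"a", "b"}

# Precise case: a single distribution, so lower == upper.
precise = [{"a": 0.25, "b": 0.5, "c": 0.25}]
print(envelope(precise, event))   # (0.75, 0.75)

# Vacuous case: point masses on every outcome give the trivial bounds [0, 1].
vacuous = [{"a": 1.0, "b": 0.0, "c": 0.0},
           {"a": 0.0, "b": 1.0, "c": 0.0},
           {"a": 0.0, "b": 0.0, "c": 1.0}]
print(envelope(vacuous, event))   # (0.0, 1.0)

# In between: a few candidate distributions give a nontrivial interval.
partial = [{"a": 0.25, "b": 0.25, "c": 0.5},
           {"a": 0.5, "b": 0.375, "c": 0.125}]
print(envelope(partial, event))   # (0.5, 0.875)
```

The payoff values here are dyadic fractions purely so the arithmetic is exact in floating point; nothing in the construction depends on that choice.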

Some approaches, summarized under the name nonadditive probabilities,[8] directly use one of these set functions, assuming the other one to be naturally defined such that $\underline{P}(A^c)= 1-\overline{P}(A)$, with $A^c$ the complement of $A$. Other related concepts understand the corresponding intervals $[\underline{P}(A), \overline{P}(A)]$ for all events as the basic entity.[9][10]
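For envelope models of the kind above, the conjugacy relation $\underline{P}(A^c) = 1-\overline{P}(A)$ holds automatically, since each candidate distribution is additive. A minimal sketch, with hypothetical numbers:

```python
# Sketch: conjugacy P_lower(complement of A) == 1 - P_upper(A) for envelope
# lower/upper probabilities, since each distribution satisfies P(A^c) = 1 - P(A).

OMEGA = {"a", "b", "c"}  # finite possibility space (hypothetical example)

credal_set = [
    {"a": 0.25, "b": 0.25, "c": 0.5},
    {"a": 0.5, "b": 0.375, "c": 0.125},
]

def prob(dist, event):
    return sum(dist[w] for w in event)

def lower(event):
    return min(prob(d, event) for d in credal_set)

def upper(event):
    return max(prob(d, event) for d in credal_set)

event = {"a"}
complement = OMEGA - event
print(lower(complement), 1 - upper(event))  # both equal 0.5
```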

## History

The idea of using imprecise probability has a long history. The first formal treatment dates back at least to the middle of the nineteenth century, when George Boole[3] aimed to reconcile the theories of logic (which can express complete ignorance) and probability. In the 1920s, in A Treatise on Probability, Keynes[11] formulated and applied an explicit interval-estimate approach to probability.

Since the 1990s, the theory has gathered strong momentum, initiated by comprehensive foundations put forward by Walley,[7] who coined the term imprecise probability, by Kuznetsov,[12] and by Weichselberger,[9][10] who uses the term interval probability. Walley's theory extends the traditional subjective probability theory via buying and selling prices for gambles, whereas Weichselberger's approach generalizes Kolmogorov's axioms without imposing an interpretation.

Usually assumed consistency conditions relate imprecise probability assignments to non-empty closed convex sets of probability distributions. Therefore, as a welcome by-product, the theory also provides a formal framework for models used in robust statistics[13] and non-parametric statistics.[14] Included are also concepts based on Choquet integration,[15] and so-called two-monotone and totally monotone capacities,[16] which have become very popular in artificial intelligence under the name (Dempster-Shafer) belief functions.[17][18] Moreover, there is a strong connection[19] to Shafer and Vovk's notion of game-theoretic probability.[20]
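The two-monotonicity mentioned above requires $\underline{P}(A\cup B)+\underline{P}(A\cap B)\geq \underline{P}(A)+\underline{P}(B)$ for all events $A, B$. As an illustrative sketch (the mass values are hypothetical, not from the source), a Dempster-Shafer belief function built from a mass assignment on focal sets can be checked numerically for this property:

```python
# Sketch: a toy Dempster-Shafer belief function on {a, b, c}, with
# bel(A) = sum of masses of focal sets contained in A, checked numerically
# for 2-monotonicity: bel(A | B) + bel(A & B) >= bel(A) + bel(B).
from itertools import chain, combinations

OMEGA = frozenset({"a", "b", "c"})

# Mass function: weights on focal subsets, summing to 1 (hypothetical numbers).
mass = {
    frozenset({"a"}): 0.25,
    frozenset({"a", "b"}): 0.25,
    frozenset({"b", "c"}): 0.25,
    OMEGA: 0.25,
}

def bel(event):
    """Belief: total mass of focal sets contained in the event."""
    return sum(m for focal, m in mass.items() if focal <= event)

def subsets(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r)
                                         for r in range(len(s) + 1))]

# Verify 2-monotonicity over all pairs of events.
two_monotone = all(
    bel(A | B) + bel(A & B) >= bel(A) + bel(B)
    for A in subsets(OMEGA) for B in subsets(OMEGA)
)
print(two_monotone)  # True: belief functions are in fact totally monotone
```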

## Mathematical models

The term imprecise probability, although something of a misnomer (the theory enables a more accurate, not less accurate, quantification of uncertainty than precise probability), appears to have been established in the 1990s. It covers a wide range of extensions of the theory of probability, including:

• lower and upper probabilities, or interval probabilities[9][10]
• belief functions[17][18]
• possibility and necessity measures[21][22][23]
• lower and upper previsions[5][6][7]
• comparative probability orderings[24][25][26]
• sets of desirable gambles[7]
• probability boxes (p-boxes)[27]
• robust Bayes methods[28]

## Interpretation of imprecise probabilities according to Walley

A unification of many of the above-mentioned imprecise probability theories was proposed by Walley,[7] although this was by no means the first attempt to formalize imprecise probabilities. In terms of probability interpretations, Walley's formulation is based on the subjective variant of the Bayesian interpretation of probability. Walley defines upper and lower probabilities as special cases of upper and lower previsions, building on the gambling framework advanced by Bruno de Finetti. In simple terms, a decision maker's lower prevision is the highest price at which the decision maker is sure he or she would buy a gamble, and the upper prevision is the lowest price at which the decision maker is sure he or she would sell the gamble (equivalently, buy its opposite). If the upper and lower previsions are equal, then they jointly represent the decision maker's fair price for the gamble: the price at which the decision maker is willing to take either side of the gamble. The existence of a fair price leads to precise probabilities.
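These buying and selling prices can be computed when the decision maker's beliefs are represented by a set of candidate distributions: the lower prevision of a gamble is then the minimum of its expectation over the set, and the upper prevision the maximum. The following sketch uses hypothetical payoffs and distributions, not an example from Walley:

```python
# Sketch: lower/upper previsions of a gamble as the envelope of its
# expectation over a credal set. The lower prevision is read as a supremum
# buying price, the upper prevision as an infimum selling price.

def expectation(dist, gamble):
    return sum(dist[w] * gamble[w] for w in dist)

def previsions(credal_set, gamble):
    values = [expectation(d, gamble) for d in credal_set]
    return min(values), max(values)

# A gamble paying off on {a, b, c} (hypothetical payoffs).
gamble = {"a": 1.0, "b": 0.0, "c": -1.0}

credal_set = [
    {"a": 0.25, "b": 0.25, "c": 0.5},
    {"a": 0.5, "b": 0.25, "c": 0.25},
]

low, up = previsions(credal_set, gamble)
print(low, up)  # -0.25 0.25
# Prices strictly between `low` and `up` are ones at which the decision
# maker neither buys nor sells: the imprecision gap.
```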

The allowance for imprecision, that is, a gap between a decision maker's upper and lower previsions, is the primary difference between precise and imprecise probability theories. Such gaps also arise naturally in betting markets that happen to be financially illiquid due to asymmetric information.

## Bibliography

1. Kolmogorov, A. N. (1950). Foundations of the Theory of Probability. New York: Chelsea Publishing Company.
2. de Finetti, Bruno (1974–1975). Theory of Probability. New York: Wiley.
3. Boole, George (1854). An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities. London: Walton and Maberly.
4. Smith, Cedric A. B. (1961). "Consistency in statistical inference and decision". Journal of the Royal Statistical Society B (23): 1–37.
5. Williams, Peter M. (1975). "Notes on conditional previsions". School of Math. and Phys. Sci., Univ. of Sussex.
6. Williams, Peter M. (2007). "Notes on conditional previsions". International Journal of Approximate Reasoning 44 (3): 366–383. doi:10.1016/j.ijar.2006.07.019.
7. Walley, Peter (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall. ISBN 0-412-28660-2.
8. Denneberg, Dieter (1994). Non-additive Measure and Integral. Dordrecht: Kluwer.
9. Weichselberger, Kurt (2000). "The theory of interval probability as a unifying concept for uncertainty". International Journal of Approximate Reasoning 24 (2–3): 149–170. doi:10.1016/S0888-613X(00)00032-3.
10. Weichselberger, K. (2001). Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung I: Intervallwahrscheinlichkeit als umfassendes Konzept. Heidelberg: Physica.
11. Keynes, John Maynard (1921). A Treatise on Probability. London: Macmillan and Co.
12. Kuznetsov, Vladimir P. (1991). Interval Statistical Models. Moscow: Radio i Svyaz Publ.
13. Ríos Insua, D.; Ruggeri, F., eds. (2000). Robust Bayesian Analysis. New York: Springer.
14. Augustin, T.; Coolen, F. P. A. (2004). "Nonparametric predictive inference and interval probability". Journal of Statistical Planning and Inference 124 (2): 251–272. doi:10.1016/j.jspi.2003.07.003.
15. de Cooman, G.; Troffaes, M. C. M.; Miranda, E. (2008). "n-Monotone exact functionals". Journal of Mathematical Analysis and Applications 347 (1): 143–156. arXiv:0801.1962. doi:10.1016/j.jmaa.2008.05.071.
16. Huber, P. J.; Strassen, V. (1973). "Minimax tests and the Neyman–Pearson lemma for capacities". The Annals of Statistics 1 (2): 251–263. doi:10.1214/aos/1176342363.
17. Dempster, A. P. (1967). "Upper and lower probabilities induced by a multivalued mapping". The Annals of Mathematical Statistics 38 (2): 325–339. doi:10.1214/aoms/1177698950. JSTOR 2239146.
18. Shafer, Glenn (1976). A Mathematical Theory of Evidence. Princeton University Press.
19. de Cooman, G.; Hermans, F. (2008). "Imprecise probability trees: Bridging two theories of imprecise probability". Artificial Intelligence 172 (11): 1400–1427. doi:10.1016/j.artint.2008.03.001.
20. Shafer, Glenn; Vovk, Vladimir (2001). Probability and Finance: It's Only a Game!. New York: Wiley.
21. Zadeh, L. A. (1978). "Fuzzy sets as a basis for a theory of possibility". Fuzzy Sets and Systems 1: 3–28. doi:10.1016/0165-0114(78)90029-5.
22. Dubois, Didier; Prade, Henri (1985). Théorie des possibilités. Paris: Masson.
23. Dubois, Didier; Prade, Henri (1988). Possibility Theory: An Approach to Computerized Processing of Uncertainty. New York: Plenum Press.
24. de Finetti, Bruno (1931). "Sul significato soggettivo della probabilità". Fundamenta Mathematicae 17: 298–329.
25. Fine, Terrence L. (1973). Theories of Probability. New York: Academic Press.
26. Fishburn, P. C. (1986). "The axioms of subjective probability". Statistical Science 1 (3): 335–358. doi:10.1214/ss/1177013611.
27. Ferson, Scott; Kreinovich, Vladik; Ginzburg, Lev; Myers, David S.; Sentz, Kari (2003). "Constructing Probability Boxes and Dempster–Shafer Structures". SAND2002-4015. Sandia National Laboratories. Retrieved 2009-09-23.
28. Berger, James O. (1984). "The robust Bayesian viewpoint". In Kadane, J. B. (ed.). Robustness of Bayesian Analyses. Elsevier Science. pp. 63–144.