Statistical regularity is a notion in statistics and probability theory that random events exhibit regularity when repeated often enough, or that sufficiently many similar random events exhibit regularity. It is an umbrella term covering the law of large numbers, central limit theorems and ergodic theorems.
If one throws a die once, it is difficult to predict the outcome, but if the experiment is repeated many times, the number of times each result occurs divided by the number of throws will eventually stabilize towards a specific value.
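This stabilization can be seen in a short simulation. The sketch below (names and parameters are illustrative, not from the source) throws a simulated fair die and tracks the relative frequency of each face; with many throws, each frequency settles near the theoretical probability 1/6:

```python
import random

def relative_frequencies(num_throws, seed=0):
    """Throw a fair six-sided die num_throws times and return the
    relative frequency of each face (faces 1 through 6)."""
    rng = random.Random(seed)
    counts = [0] * 6
    for _ in range(num_throws):
        counts[rng.randint(1, 6) - 1] += 1
    return [c / num_throws for c in counts]

# Few throws: frequencies fluctuate noticeably.
print(relative_frequencies(60))
# Many throws: each frequency stabilizes near 1/6 ≈ 0.1667.
print(relative_frequencies(600_000))
```

Running this with increasing `num_throws` shows the fluctuations shrinking, which is the content of the law of large numbers.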
Repeating a series of trials will produce similar, but not identical, results for each series: the average, the standard deviation and other distributional characteristics will be around the same for each series of trials.
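The series-to-series regularity described above can also be sketched in code. The example below (a minimal illustration; the function name and series sizes are assumptions) runs several independent series of die throws and prints their summary statistics, which come out nearly identical across series:

```python
import random
import statistics

def series_summary(num_throws, seed):
    """Return the mean and population standard deviation of one
    series of fair six-sided die throws."""
    rng = random.Random(seed)
    throws = [rng.randint(1, 6) for _ in range(num_throws)]
    return statistics.mean(throws), statistics.pstdev(throws)

# Independent series of 100,000 throws produce almost the same
# mean and standard deviation (theoretical values: 3.5 and ≈ 1.708).
for seed in range(3):
    mean, sd = series_summary(100_000, seed)
    print(f"series {seed}: mean={mean:.3f}, sd={sd:.3f}")
```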
Observations of this phenomenon provided the initial motivation for the concept of what is now known as frequency probability.
This phenomenon should not be confused with the gambler's fallacy: statistical regularity concerns only behaviour in the (possibly very) long run. The gambler's fallacy does not apply to statistical regularity because the latter describes the aggregate of many trials rather than individual cases.