
Outcome (probability)

In probability theory, an outcome is a possible result of an experiment or trial.[1] Each possible outcome of a particular experiment is unique, and different outcomes are mutually exclusive (only one outcome will occur on each trial of the experiment). All of the possible outcomes of an experiment form the elements of a sample space.[2]

For the experiment where we flip a coin twice, the four possible outcomes that make up our sample space are (H, T), (T, H), (T, T), and (H, H), where "H" represents a "heads" and "T" represents a "tails". Outcomes should not be confused with events, which are sets (or informally, "groups") of outcomes. For comparison, we could define an event to occur when "at least one heads" is flipped in the experiment, that is, when the outcome contains at least one heads. This event would contain all outcomes in the sample space except the element (T, T).
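
The distinction can be shown with a minimal Python sketch (the variable names here are illustrative and not taken from the cited sources):

    from itertools import product

    # Sample space for flipping a coin twice: each outcome is an ordered pair.
    sample_space = set(product("HT", repeat=2))
    # {('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')}

    # The event "at least one heads" is the subset of outcomes containing an 'H'.
    at_least_one_heads = {outcome for outcome in sample_space if "H" in outcome}
    # Every outcome except ('T', 'T') belongs to this event.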

Sets of outcomes: events

Because individual outcomes may be of little practical interest, or because there may be prohibitively (even infinitely) many of them, outcomes are grouped into sets of outcomes that satisfy some condition, which are called "events". The collection of all such events is a sigma-algebra.[3]

An event containing exactly one outcome is called an elementary event. The event that contains all possible outcomes of an experiment is its sample space. A single outcome can be a part of many different events.[4]

Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite (most notably when the outcome must be some real number). So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events.
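
As a rough illustration of the finite case (the helper name all_events is hypothetical, not a standard library function), the events of a two-outcome sample space can be enumerated as its power set:

    from itertools import combinations

    def all_events(sample_space):
        # For a finite sample space, every subset (every element of the
        # power set) can be taken as an event.
        s = list(sample_space)
        return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    events = all_events({"H", "T"})
    # [set(), {'H'}, {'T'}, {'H', 'T'}]: the empty event, the two elementary
    # events, and the sample space itself.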

Probability of an outcome

Outcomes may occur with probabilities that are between zero and one (inclusive). In a discrete probability distribution whose sample space is finite, each outcome is assigned a particular probability. In contrast, in a continuous distribution, individual outcomes all have zero probability, and non-zero probabilities can only be assigned to ranges of outcomes.
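
A minimal sketch of this contrast, using a fair die for the discrete case and a uniform distribution on [0, 1] for the continuous case (the helper p_uniform is illustrative only):

    from fractions import Fraction

    # Discrete case: a fair six-sided die assigns probability 1/6 to each outcome,
    # and the probability of an event is the sum over the outcomes it contains.
    die = {face: Fraction(1, 6) for face in range(1, 7)}
    p_even = sum(die[face] for face in die if face % 2 == 0)   # 1/2

    # Continuous case: for a uniform distribution on [0, 1], a single outcome has
    # probability zero; only ranges of outcomes receive non-zero probability.
    def p_uniform(a, b):
        # P(a <= X <= b) for X uniform on [0, 1]; assumes 0 <= a <= b <= 1.
        return b - a

    p_single = p_uniform(0.5, 0.5)   # 0.0
    p_range = p_uniform(0.2, 0.5)    # 0.3 (up to floating-point rounding)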

Some "mixed" distributions contain both stretches of continuous outcomes and some discrete outcomes; the discrete outcomes in such distributions can be called atoms and can have non-zero probabilities.[5]

Under the measure-theoretic definition of a probability space, the probability of an outcome need not even be defined. In particular, the set of events on which probability is defined may be some σ-algebra on the sample space and not necessarily its full power set.
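
For instance, the following sketch (an illustrative construction, not drawn from the references) shows a σ-algebra on a four-outcome set that is strictly smaller than the power set, so that individual outcomes carry no defined probability:

    S = frozenset({1, 2, 3, 4})

    # A sigma-algebra on S that is strictly smaller than the power set: it
    # contains the empty set and S, and is closed under complements and unions.
    events = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), S}

    # Probability is defined only on these events; an individual outcome such
    # as 1 does not form an event here, so its probability is not defined.
    P = {frozenset(): 0.0, frozenset({1, 2}): 0.25, frozenset({3, 4}): 0.75, S: 1.0}

    print(frozenset({1}) in events)   # False: P({1}) is undefined in this space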

Equally likely outcomes

Flipping a coin leads to two outcomes that are almost equally likely.
Up or down? Flipping a brass tack leads to two outcomes that are not equally likely.

In some sample spaces, it is reasonable to estimate or assume that all outcomes in the space are equally likely (that they occur with equal probability). For example, when tossing an ordinary coin, one typically assumes that the outcomes "head" and "tail" are equally likely to occur. An implicit assumption that all outcomes are equally likely underpins most randomization tools used in common games of chance (e.g. rolling dice, shuffling cards, spinning tops or wheels, drawing lots, etc.). Of course, players in such games can try to cheat by subtly introducing systematic deviations from equal likelihood (for example, with marked cards, loaded or shaved dice, and other methods).
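
Under this assumption, event probabilities reduce to counting. A brief sketch with two dice (the variable names are illustrative):

    from itertools import product
    from fractions import Fraction

    # With equally likely outcomes, the probability of an event is the number
    # of outcomes in the event divided by the total number of outcomes.
    sample_space = list(product(range(1, 7), repeat=2))   # 36 ordered rolls of two dice
    sum_is_seven = [roll for roll in sample_space if sum(roll) == 7]

    p = Fraction(len(sum_is_seven), len(sample_space))    # 6/36 = 1/6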

Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely.[6] However, there are experiments that are not easily described by a set of equally likely outcomes: for example, if one were to toss a thumbtack many times and observe whether it landed with its point upward or downward, there is no symmetry to suggest that the two outcomes should be equally likely.
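
In such cases the probabilities can only be estimated from observed frequencies, as in the following hypothetical simulation (the value 0.62 is an arbitrary placeholder, not a measured figure):

    import random

    # toss_tack and the value 0.62 are hypothetical: the true probability of a
    # tack landing point-up is unknown and must be estimated from repeated trials.
    def toss_tack(p_up=0.62):
        return "up" if random.random() < p_up else "down"

    tosses = [toss_tack() for _ in range(10_000)]
    estimated_p_up = tosses.count("up") / len(tosses)   # near 0.62, not 1/2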

References

  1. ^ "Outcome - Probability - Math Dictionary". HighPointsLearning. Retrieved 25 June 2013.
  2. ^ Albert, Jim (21 January 1998). "Listing All Possible Outcomes (The Sample Space)". Bowling Green State University. Archived from the original on 16 October 2000. Retrieved 25 June 2013.
  3. ^ Leon-Garcia, Alberto (2008). Probability, Statistics and Random Processes for Electrical Engineering. Upper Saddle River, NJ: Pearson. ISBN 9780131471221.
  4. ^ Pfeiffer, Paul E. (1978). Concepts of probability theory. Dover Publications. p. 18. ISBN 978-0-486-63677-1.
  5. ^ Kallenberg, Olav (2002). Foundations of Modern Probability (2nd ed.). New York: Springer. p. 9. ISBN 0-387-94957-7.
  6. ^ Foerster, Paul A. (2006). Algebra and Trigonometry: Functions and Applications, Teacher's Edition (Classics ed.). Upper Saddle River, NJ: Prentice Hall. p. 633. ISBN 0-13-165711-9.