Talk:Probability distribution/Archive 1


CDF vs. PDF

For continuous random variables you ARE giving the cumulative distribution function, not the probability density function... eh?


The cdf is defined for all random variables, discrete or continuous, so it is a better starting point than either the probability function or the density function. In one case you use differences to get the probability function and in the other you use the derivative. Most students are introduced to the derivative before the integral, so this approach is a bit more accessible -- DickBeldin


The probability that a continuous random variable X takes a value less than or equal to x is denoted Pr(X<=x). The probability density function of X, where X is a continuous random variable, is the function f such that

  • Pr(a <= X <= b) = ∫ f(x) dx, integrating x from a to b.

Correct, but F[b]-F[a] gives the probability of an interval directly without all the complications. We hide the complications in the cdf. It is inconvenient that we can't feature the explicit form of the cdf for many of the distributions we like to use, but it is important to build the concepts with proper spacing of the difficulties. One hurdle, then a straight stretch, then a curve, then another straight ... -- DickBeldin
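To make DickBeldin's point concrete, here is a small Python sketch (not part of the original discussion) checking numerically that F(b) - F(a) equals the integral of the density over [a, b], using the standard normal distribution as the example; the midpoint integrator and the interval are illustrative choices:

```python
import math

def normal_pdf(x):
    """Density of the standard normal distribution."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x):
    """Cumulative distribution function, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def integrate(f, a, b, n=100_000):
    """Simple midpoint-rule numerical integration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = -1.0, 2.0
via_cdf = normal_cdf(b) - normal_cdf(a)   # F(b) - F(a): one subtraction
via_pdf = integrate(normal_pdf, a, b)     # integral of f over [a, b]
print(via_cdf, via_pdf)  # the two agree to many decimal places
```

Both routes give Pr(a <= X <= b); the cdf route "hides the complications" exactly as described above.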

You may present this material as you feel best. I don't disagree with your argument. But, mislabelling definitions is never okay. You have defined the probability density function for continuous random variables with the cumulative distribution function for the same. RoseParks

Surely you mean absolutely continuous. And defining the pdf from the cdf is the right way to do things. If you want to go to first principles, you need to specify a Borel measure on the real numbers, and the best way to do that is using a Lebesgue-Stieltjes measure. In probability theory, you call measures distributions, and the Lebesgue-Stieltjes measure is called the cdf. -- Miguel


Restriction to real-valued variables

The definitions given on this page seem much too limited. A probability distribution can be defined for random variables whose domain is not even ordered (take the multinomial, for instance). In these cases, the cumulative distribution function makes no sense. To claim, as this page does, that the distribution must have the reals as the domain is nonsense.

Agreed, but it seems to be customary to use this restricted interpretation of the domain. It is possible to define the cdf for vector-valued random variables (including your example) but this is very clumsy. Vector-valued functions are usually treated as collections of correlated real variables. -- Miguel

The Boltzmann distribution

The so-called Boltzmann distribution is a strange beast to include in the list of discrete distributions, as are all the "special cases" listed under it. The reason is that the Boltzmann distribution is just a rule that, given a collection of states (not necessarily a set of real numbers) and their energies (not necessarily all distinct) gives a probability measure on the collection of states. It can be applied to discrete and continuous collections of states, and especially in the discrete case there is no reason why the states should be labelled by real numbers. Some of the special cases, for instance the Maxwell-Boltzmann distribution, are not even discrete! — Miguel 21:32, 2004 Apr 24 (UTC)

  • All true, but it is still an important distribution. The fact that it has a strong relationship to physics does not single it out. --Pdbailey 01:35, 1 Sep 2004 (UTC)
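For readers following along, the "rule" Miguel describes can be sketched in a few lines of Python; the three states and their energies below are made up purely for illustration, and note that the state labels are strings, not real numbers:

```python
import math

def boltzmann(energies, kT=1.0):
    """Probability of each state: p_i proportional to exp(-E_i / kT).

    `energies` maps arbitrary state labels to energies; the labels
    need not be real numbers, which is exactly Miguel's point.
    """
    weights = {state: math.exp(-e / kT) for state, e in energies.items()}
    z = sum(weights.values())  # partition function
    return {state: w / z for state, w in weights.items()}

# Illustrative (made-up) three-state system.
p = boltzmann({"ground": 0.0, "excited": 1.0, "ionized": 3.0}, kT=1.0)
print(p)  # probabilities sum to 1; lower-energy states are more likely
```

The same rule applied to a continuous collection of states yields a density rather than a probability mass function.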

Rare events

I think describing the Poisson and associated "counting" distributions as concerning 'rare random events' is not quite right. For example, counting decays of Potassium-40 with a gamma spectrometer, one could count a hundred per second... I would edit it, but I can't come up with a better way of saying it. Can you?--Pdbailey 01:32, 1 Sep 2004 (UTC)

Nonetheless, they are rare in the sense intended. The reason they are Poisson-distributed is that there is only one decay out of each zillion or so opportunities. Michael Hardy 14:18, 2 Sep 2004 (UTC)
... and besides, if you were looking at some very long time -- say several seconds -- you'd probably want to model it as a normal distribution rather than as a Poisson distribution. Michael Hardy 14:19, 2 Sep 2004 (UTC)
Well, I understand what you are saying, but the point is to make it easier to understand. If I get 2 counts per second, then after 30 seconds, the normal distribution isn't going to do it. But the events are not rare. Me washing my car is rare, something that happens at 2 Hz is not rare. Pdbailey 05:17, 4 Sep 2004 (UTC)
The original meaning of rare in this context was that the probability of two events occurring simultaneously is zero. You may disagree with that use of the word, but we're stuck with it for historical reasons. I have heard that one of its first uses in the 19th century was to model the occurrence of army officers being kicked by horses.
Also, it becomes clear in what sense the Poisson distribution is rare if you look at its derivation as a limit of the Binomial.
Finally, whether two decay events per second are rare depends on the timescales involved. For you a second seems like a very short time, but when talking about subatomic physics a second is an eternity. Similarly, you think washing a car is a rare event because you measure the frequency per day, or per week. If you do it per year it ceases to be "rare" by your definition.
There are three time scales involved here: the time resolution of the experiment; the average time interval between events; and the unit of time used to express frequencies. The unit is irrelevant. If the resolution is much smaller than the interval, you use Poisson. If it is much larger, you use normal (as an approximation: Poisson is still exact). — Miguel 16:25, 4 Sep 2004 (UTC)
Your arguments (referred to by paragraph) do not hold water. (1) The definition of rare is well given by Wiktionary as "very uncommon; scarce", and how somebody misused it centuries ago while describing this particular distribution does not change the fact that this is misleading. (2) It becomes clear not that it is rare, but that each iota of the Poisson-distributed events is unlikely, not rare. Phone calls arriving at a help center will often be frequent (think many per second) and are Poisson distributed. The chance that any given person called is very low. (3) We can dismiss this out of hand with the previous example.
The Poisson distribution is the limit of the Binomial distribution when the probability of success goes to zero (hence rare events) but the average number of successes per unit time is kept constant when taking the limit. Hence, rare events. If you don't get it, you don't get it.
There is such a thing as historical accidents, conventions and tradition in the way science, technology and all of human knowledge is organized. You have to live with that, and wikipedia is not the place to revolutionize notation or terminology. If you don't get it, you don't get it.
Miguel 03:02, 7 Sep 2004 (UTC)
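Miguel's limit argument can be checked numerically: a binomial with many trials and a tiny per-trial success probability approaches the Poisson pmf when the mean n*p is held fixed. A quick Python sketch (the values of lambda and k are illustrative):

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability of k events with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam, k = 2.0, 3
for n in (10, 100, 10_000):
    # p = lam / n shrinks as n grows: each trial is individually unlikely,
    # but the expected count lam stays fixed.
    print(n, binom_pmf(k, n, lam / n))
print("Poisson:", poisson_pmf(k, lam))
```

The binomial values converge on the Poisson value, which is the sense in which the individual events are "rare" even when the overall rate is high.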
Miguel, please explain to my how a call center that receives 20 calls per second is observing rare events? Read the definition, "scarce." 20 calls per second is hardly a drought. What I have changed it to, "which describes a very large number of individually unlikely events" is more accurate (captures the derivation from binomial distribution). If you have another wording that is more accurate, please, propose it. Pdbailey 04:43, 7 Sep 2004 (UTC)
I told you the unit in which you measure time is irrelevant. You are talking about .05 calls per millisecond.
Now seriously, your change is inaccurate because the Poisson distribution can describe a very small number of individually unlikely events, too. The name "distribution of rare events" is something we're stuck with for historical reasons, and it is a synonym for "Poisson distribution". Try this:
The Poisson family of distributions describes rare independent events and is parameterized by the average number of events occurring. Note that the average number of events can be large or small depending on the situation, and that it is the individual events that are "rare".
Here's the problem: the Poisson distribution is as ubiquitous as the normal distribution and has many applications. The intuitive explanation of why the Poisson distribution applies in one particular situation may be "misleading" in another situation. The list of probability distributions is not the place to discuss those nuances; that's what the article Poisson distribution is for. — Miguel
Miguel, please answer this set of questions. Rare is a relative word; if you want to be clear, almost any other word would be better. Please explain why you want to use it. You keep saying that we are stuck with it for historical reasons. What is your argument? Why do we have to use it based on 'historical reasons'? What are the historical reasons?
We are stuck with it for reasons of tradition. That is the name it was given in old texts, and such things propagate as people copy each other. — Miguel 17:17, 2004 Sep 12 (UTC)
I do not like your definition because it is overly wordy, "note...can be...depending...and that it is..."
My definition is very tight and accurate, let me argue for it.
which describes a very large number of individually unlikely events that happen in a certain time interval.
This is inaccurate: it can describe a very small number of events, too. Can't you see that the number of events (per unit time) can be any positive number, large or small? Can't you see that simply changing the unit of time can make this average as large or small as you please? — Miguel 17:17, 2004 Sep 12 (UTC)
Poisson distributed events must be a large number (look at the proof on the page for the Poisson distribution), each of which must be unlikely (look at the proof on the page). The time interval bit differentiates it from the Erlang distribution. Again, this definition is short, accurate, and even hints at the proof.
Finally, this page (discussion) is for the discussion of entries on this page. So long as the sentence is on this page, discussion about it belongs here.--Pdbailey 00:10, 11 Sep 2004 (UTC)
Do as you please, I don't care any more. — Miguel 17:17, 2004 Sep 12 (UTC)

other distributions

Are the Rayleigh distribution and Rician distribution important enough to be included in the List? -FZ 15:13, 6 Jan 2005 (UTC)

What about the Nakagami-m distribution? -Mangler 12:05, 26 July 2005 (UTC)

Zipf and Zipf-Mandelbrot

Zipf's law is for a finite N = number of elements or outcomes, for example, the number of words in the English language. When N becomes infinite, Zipf's law becomes the zeta distribution. The Zipf-Mandelbrot law is a generalization of Zipf. When N becomes infinite, Zipf-Mandelbrot becomes something, I don't know what at this point, but whatever it is, it involves the Lerch transcendent function just as the zeta distribution involves the Riemann zeta function. PAR 08:04, 12 Apr 2005 (UTC)
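PAR's finite-N versus infinite-N point can be illustrated numerically: the Zipf normalizer (a generalized harmonic number) tends to the Riemann zeta function as N grows, so the Zipf pmf tends to the zeta pmf. A rough Python sketch, with arbitrary truncation levels standing in for the infinite sums:

```python
import math

def zipf_pmf(k, s, N):
    """Zipf's law over a finite set of N ranks."""
    h = sum(1 / r**s for r in range(1, N + 1))  # generalized harmonic number
    return (1 / k**s) / h

def zeta_pmf(k, s, terms=200_000):
    """Zeta distribution: the N -> infinity limit of Zipf (needs s > 1).

    The normalizer is a truncated approximation to the Riemann zeta
    function zeta(s); `terms` is an arbitrary illustrative cutoff.
    """
    zeta_s = sum(1 / r**s for r in range(1, terms + 1))
    return (1 / k**s) / zeta_s

s, k = 2.0, 3
for N in (10, 1_000, 100_000):
    print(N, zipf_pmf(k, s, N))   # approaches the zeta value as N grows
print("zeta:", zeta_pmf(k, s))
```

For s = 2 the exact normalizer is zeta(2) = pi^2/6, which the truncated sum approximates closely.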

You're right. Sorry about moving it back – I had looked at the support field in the infobox for Zipf's law and decided to recategorize the article here. The right thing to do for me would have been to fix the support field, which suggested that k ranges over the full set of natural numbers. I'm gonna work on that now. --MarkSweep 13:58, 12 Apr 2005 (UTC)

Ok, I'll go ahead with some plots for Zipf and zeta. PAR 15:06, 12 Apr 2005 (UTC)

The Economist

This page was featured in The Economist at Psychology - Bayes Rules.

From: Schaefer, Tom PAX Tecolote 
Sent: Monday, May 08, 2006 9:24 AM
To: 'robert.dragoset@nist.gov'
Cc: McDowell, Jeff HSV Tecolote
Subject: Uncertainty in Physical Values

Hi PhD Dragoset,

    I hope your presentation on units to OASIS went well, and the issue is 
being given the priority and urgency it deserves.

    The input values required for the execution of our cost estimating models 
are often uncertain and, rather than a discrete value, can be more accurately 
described as a range or distribution of values.  I would like you to consider 
helping to develop an XML standard for representing the uncertainty of a 
numeric value in a standard way that “Monte Carlo” and other analysis tools 
could interpret universally.  The intent would be to develop an XML element 
that could be used almost anywhere a quantitative attribute is currently used 
(the current discrete or point value being a subset of the element).  The 
parameters for common distribution types, such as uniform, triangular, 
Gaussian, beta, Poisson, Weibull, etc. ( 
http://en.wikipedia.org/wiki/Probability_distribution ) would be supported, as 
well as a way to represent a data set of values to sample from.

    Do you have any interest in this topic?  It could have profound importance 
to the meaningfulness of data exchanges and quality of analysis.

Tom Schaefer
Senior Technical Expert
Tecolote Research, Inc.

Diagnostic Tool

I think there should be a section on randomness as a diagnostic tool in certain mathematical applications, such as regression etc. Just a thought... --Gogosean 20:58, 15 November 2006 (UTC)

Plots

The plots in general are quite illustrative and pretty. However, for the discrete distributions (Poisson, etc.) wouldn't it make more sense to have bar-like plots rather than connected points? The segments between the points have no meaning as far as the distribution is concerned; only the frequency value does. I understand it's tricky to superpose bar graphs, but there are ways around that, like outlining the bars or making separate plots. Also, the order of the plots seems arbitrary. Why is the relatively obscure Skellam distribution near the top and the Gaussian at the very bottom? Shouldn't the beta and the gamma be closer? Shouldn't the t and F distributions be included? Or the binomial and negative-binomial? -- Eliezg 01:49, 9 December 2006 (UTC)

Lattice Distribution

The random variable takes values a+nb, where n is an integer and b>0. This is a discrete distribution with infinite support. Jackzhp 14:48, 18 February 2007 (UTC)

symmetric vs. asymmetric distribution

Can someone please classify all the distributions into symmetric and asymmetric distributions? Jackzhp 15:05, 24 February 2007 (UTC)

Merge from discrete probability distribution!

Please merge in any text that was missed.

In probability theory, a probability distribution is called discrete, if it is characterized by a probability mass function. Thus, the distribution of a random variable X is discrete, and X is then called a discrete random variable, if

∑_u Pr(X = u) = 1     (1)

as u runs through the set of all possible values of X.

If a random variable is discrete, then the set of all values that it can assume with non-zero probability is finite or countably infinite, because the sum of uncountably many positive real numbers (which is the smallest upper bound of the set of all finite partial sums) always diverges to infinity.

Typically, this set of possible values is a topologically discrete set in the sense that all its points are isolated points. But, there are discrete random variables for which this countable set is dense on the real line.

The Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution are among the most well-known discrete probability distributions.

Alternative description

Equivalently to the above, a discrete random variable can be defined as a random variable whose cumulative distribution function (cdf) increases only by jump discontinuities — that is, its cdf increases only where it "jumps" to a higher value, and is constant between those jumps. The points where jumps occur are precisely the values which the random variable may take. The number of such jumps may be finite or countably infinite. The set of locations of such jumps need not be topologically discrete; for example, the cdf might jump at each rational number.
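As an illustration of this jump description, the cdf of a fair die can be written directly as a sum of jumps of size 1/6 at the values 1 through 6; a minimal Python sketch (the fair die is just an example):

```python
from fractions import Fraction

# pmf of a fair die: a jump of 1/6 at each of the values 1..6
pmf = {v: Fraction(1, 6) for v in range(1, 7)}

def cdf(x):
    """Cumulative distribution function: the sum of jumps at values <= x."""
    return sum(p for v, p in pmf.items() if v <= x)

# The cdf increases only at the jump points and is constant in between.
print(cdf(0.5), cdf(1), cdf(3.7), cdf(6))  # 0, 1/6, 1/2, 1
```

The jump locations (1, ..., 6) are exactly the values the random variable may take, matching the characterization above.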

Representation in terms of indicator functions

For a discrete random variable X, let u0, u1, ... be the values it can assume with non-zero probability. Denote

Ω_i = {ω : X(ω) = u_i} = X^(-1)(u_i), for i = 0, 1, 2, ...

These are disjoint sets, and by formula (1)

Pr(Ω_0 ∪ Ω_1 ∪ ...) = ∑_i Pr(Ω_i) = 1.

It follows that the probability that X assumes any value except for u0, u1, ... is zero, and thus one can write X as

X(ω) = ∑_i u_i 1_{Ω_i}(ω)

except on a set of probability zero, where 1_A is the indicator function of A. This may serve as an alternative definition of discrete random variables.

Merge from continuous...

Ditto!

In probability theory, a probability distribution is called continuous if its cumulative distribution function is continuous. That is equivalent to saying that for random variables X with the distribution in question, Pr[X = a] = 0 for all real numbers a, i.e.: the probability that X attains the value a is zero, for any number a.

While for a discrete probability distribution one could say that an event with probability zero is impossible, this cannot be said in the case of a continuous random variable, because then no value would be possible. This paradox is resolved by realizing that the probability that X attains some value within an uncountable set (for example an interval) cannot be found by adding the probabilities for individual values.

Under an alternative and stronger definition, the term "continuous probability distribution" is reserved for distributions that have probability density functions. These are most precisely called absolutely continuous random variables (see Radon – Nikodym theorem). For a random variable X, being absolutely continuous is equivalent to saying that the probability that X attains a value in any given subset S of its range with Lebesgue measure zero is equal to zero. This does not follow from the condition Pr[X = a] = 0 for all real numbers a, since there are uncountable sets with Lebesgue-measure zero (e.g. the Cantor set).

A random variable with the Cantor distribution is continuous according to the first convention, but according to the second, it is not (absolutely) continuous. Also, it is neither discrete nor a weighted average of discrete and absolutely continuous random variables.

In practical applications, random variables are often either discrete or absolutely continuous, although mixtures of the two also arise naturally.

The normal distribution, continuous uniform distribution, Beta distribution, and Gamma distribution are well known absolutely continuous distributions. The normal distribution, also called the Gaussian or the bell curve, is ubiquitous in nature and statistics due to the central limit theorem: every variable that can be modelled as a sum of many small independent variables is approximately normal. —Preceding unsigned comment added by MisterSheik (talkcontribs)
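The closing claim about the central limit theorem is easy to demonstrate by simulation. In this illustrative Python sketch, each sample is a sum of 48 small independent uniform variables (the counts and the seed are arbitrary choices):

```python
import random
import statistics

random.seed(42)

# Each sample is a sum of 48 independent uniforms on [0, 1); by the
# central limit theorem the sums are approximately normal with
# mean 48 * 1/2 = 24 and variance 48 * 1/12 = 4 (std. dev. 2).
samples = [sum(random.random() for _ in range(48)) for _ in range(20_000)]

print(statistics.mean(samples))   # close to 24
print(statistics.stdev(samples))  # close to 2
```

A histogram of `samples` would show the familiar bell shape, even though each summand is uniform rather than normal.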

The initial sentence is terrible!!

Here it is:

In probability theory, a probability distribution is a function of the probabilities of a mutually exclusive set of events.

That is idiotic nonsense. I had no idea this article was in such profoundly bad shape. I'm going to have to think about how to rephrase this. Michael Hardy 17:24, 7 May 2007 (UTC)

And now another introductory sentence

Now it says:

In probability theory, every random variable is a function defined on a state space equipped with a probability distribution that assigns a probability to every subset (more precisely every measurable subset) of its state space in such a way that the probability axioms are satisfied.

That makes sense, except as an introductory sentence. I'll think about what would be good in that role. Michael Hardy 22:17, 11 August 2007 (UTC)

When you say state space ("... of its state space in such a way ..."), do you mean sample space? A state space is what you are defining. From the first paragraph of the random variable article, "Formally, a random variable is a measurable function from a sample space to the measurable space of possible values of the variable". Also, given that this article and the random variable article will very likely be referenced together, it might be nice for the two definitions to be more obviously equivalent. —Preceding unsigned comment added by 74.211.70.98 (talk) 07:57, 8 September 2007 (UTC)

Also, although outside the scope of this article, state space is not very well defined. —Preceding unsigned comment added by 74.211.70.98 (talk) 08:22, 8 September 2007 (UTC)

Accessibility

"Probability distribution" is a term that many non-mathematicians encounter while reading about the application of statistics to non-mathematical subjects. However, the introductory paragraph of the article is completely incomprehensible to anyone not trained to a fairly high level in statistics / probability theory. Would it be possible to summarize the concept in less specialized language as well as giving the formal definition? This is a question and not a complaint.Spiridens (talk) 15:58, 18 November 2007 (UTC)

In view of the many comments regarding the technical nature of the intro, and of the high importance of this article, I took a stab at making it general and comprehensible. I'm not sure how well I succeeded, but there it is. Best, Eliezg (talk) 02:44, 29 November 2007 (UTC)

"Greater than" vs. "different": a discourse on dartboards

An IP changed the following sentence in the introduction:

"The probability of landing within the small area of the bullseye would (hopefully) be greater than landing on an equivalent area elsewhere on the board."

to:

"The probability of landing within the small area of the bullseye could be different than landing on an equivalent area elsewhere on the board."

I reverted the change, which was clearly made in good faith and is probably technically more accurate. The original wording is meant to indirectly justify the presumably unimodal distribution of dart landings. Even a very poor dart player would have a slightly higher chance of landing IN THE BULLSEYE than within a bullseye-sized area far away, while an excellent dart thrower (aiming for the bullseye) would have a smaller variance on the landing distribution. The parenthetical "hopefully" is unattractive, and perhaps a total rewording or a different example would be preferable, but the idea is that the reader would (hopefully!) have an intuitive feel for the two-dimensional continuous distribution of dart landings. The great thing about dartboards, after all, is that they record all of the aggregated data, and it is invariably clustered around the bullseye (while featuring prominent outliers near the ceiling and the floor and, perhaps, the bartender's bottom). Best, Eliezg (talk) 04:57, 25 December 2007 (UTC)
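Eliezg's intuition can be checked by simulation. The sketch below models dart landings as a bivariate normal centred on the bullseye (the spread of 3 "bullseye radii", the offset, and the sample size are made-up illustrative values) and confirms that a bullseye-sized disc at the centre is hit more often than an equal disc off-centre:

```python
import random

random.seed(1)

def hits_disc(x, y, cx, cy, r):
    """True if the point (x, y) lands in the disc of radius r at (cx, cy)."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

# Dart landings: independent normals in x and y, centred on the bullseye,
# with a (made-up) standard deviation of 3 bullseye radii.
bull, offset, n = 0, 0, 200_000
for _ in range(n):
    x, y = random.gauss(0, 3), random.gauss(0, 3)
    bull += hits_disc(x, y, 0, 0, 1)     # bullseye-sized area at the centre
    offset += hits_disc(x, y, 6, 0, 1)   # equal area, 6 radii off-centre

print(bull / n, offset / n)  # the central disc is hit noticeably more often
```

For this spread the exact central-disc probability is 1 - exp(-1/18), about 5.4%, while the off-centre disc is hit far less often, which is the "greater than" in the original wording.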


Is this a joke?

Please read the first paragraph; this is a graduate-level description of probability distribution, yet probability distribution is a topic even high school students can be expected to look up on Wikipedia. I'd fix it, but I am not smart enough. Perhaps you can fix this? —Preceding unsigned comment added by 128.12.146.118 (talk) 01:08, 29 November 2007 (UTC)

someone has already made substantial improvements since I posted above; thanks! —Preceding unsigned comment added by 128.12.146.118 (talk) 07:50, 29 November 2007 (UTC)

There is a concept of formality and informality in mathematics. In Wikipedia, it is a convention to always give the formal definition in an article; the formal definition of a random variable is defined through some measure theory. If high-school students want to learn about PDFs, then they should rather consult a textbook.

Topology Expert (talk) 14:06, 24 August 2008 (UTC)

What on earth makes you think that? This is a general-purpose encyclopedia; if a topic can be understood by a wide group of people (which, granted, is not always possible), then I can't see that we should be over-formalising things so that they can't understand it here. "Consult a textbook", if it were a valid argument, would be one against the whole of Wikipedia. If anything, professional mathematicians are better placed to consult a paper source. I agree that the body of the article should cover the most technical definition possible, but elsewhere in the body and in the lead section there should also be an informal argument (which helps understand the technical one even where it can be understood in principle). Quietbritishjim (talk) 23:28, 25 August 2008 (UTC)

I agree with you that an informal argument (as well as a formal one) should be presented in the article. What I am saying is that in learning a mathematical concept, one must do exercises, see examples, think of his own problems and perhaps also discuss it with someone else (if necessary). Wikipedia does not provide these things so it would be more advantageous to learn from a textbook but understanding the concept through a different viewpoint (such as what Wikipedia says about the concept), also helps in the learning process.

Topology Expert (talk) 05:46, 27 August 2008 (UTC)


Links to source code & a Statistical Distribution Explorer from all Wikipedia Distribution entries.

I trust it will be in order to add the following link to the 'External links' for ALL statistical distributions:

  • Distribution Explorer is a mixed C++ and C# Windows application that allows you to explore the properties (mean, variance...) of most popular statistical distributions, and calculate the Cumulative Distribution Function (CDF), Probability Density Function (PDF), or quantiles/percentiles.

It is written using open-source C++ from the Boost Math Toolkit library which will be a useful tool for those using distributions in C++. The C++ code is templated on floating-point type (including user-defined types, typically 128 or 256-bit fixed, or arbitrary precision like NTL) allowing rather accurate values to be calculated if required.

(End of link).

My justification is that:

This Distribution Explorer conveniently provides nearly all the calculations one could want for about 20 distributions, and at rather higher accuracy than some similar tools.

(Although the Distribution Explorer is Windows only, this is a very popular platform: anyone who would like to produce a similar utility on other platforms is very welcome and has all the C++ code freely available).

The open-source C++ (and C#) code has been peer reviewed and is included in the current Boost C++ library release. (Many of the Boost Libraries go on to become ISO Standard Libraries.) These C++ functions are also used to implement the Math Special Functions specified in the C++ ISO TR1.

So I believe that both the Distribution Explorer and the C++ source code are generally and widely useful tools, providing facilities not offered by others, and well worth links.

Paul A Bristow (talk) 14:46, 12 September 2008 (UTC)

I rather doubt it would be thought to be acceptable to place the same link in dozens of articles, or to put such a long entry in the list of external links for any article. I'd suggest it would be appropriate to list this package under Statistical software and under 'External links' in Probability distribution, in both cases limiting the entry to a single line. I think you should stop there. Qwfp (talk) 18:22, 12 September 2008 (UTC)

There are really two links: to the distribution explorer and to the C++ source code. Each has a two-line description. This hardly seems excessive (but could be one line)? If the links are not in the individual distributions, how are readers expected to find them? If they search for a named distribution, they won't find them.

Paul A Bristow (talk) 11:39, 13 September 2008 (UTC)

Similar links are added to (and then deleted from) whole sets of articles quite often and these are often for pdf's and cdf's of distributions. One suggestion would be to have a separate article for online calculators for common distributions and so enable having just a single wikilink in each distribution article. Melcombe (talk) 10:37, 15 September 2008 (UTC)

This seems a good (and acceptable to me) suggestion. I am willing to start this, but I'd like to be confident that some wiki-vandal will not just remove it. (I note that I posted a proposal to insert links some months before actually doing it, waiting to check that nobody objected, only to find someone promptly removed my links :-(.) Paul A Bristow (talk) 12:45, 19 September 2008 (UTC)

Split

Should the List of important probability distributions be split off into its own article? It seems like a good idea, but I figured I would post here before doing it myself. Silly rabbit (talk) 14:13, 12 March 2008 (UTC)

There is a lot of overlap with List of probability distributions. One or the other needs tidying. Perhaps only a few very important ones should be listed in this page, leaving the job of a comprehensive list to the list page? Tayste (talk - contrib) 18:56, 12 March 2008 (UTC)
Oooo. There already is a list. I think the answer is to get rid of the list in this article (moving content to List of probability distributions as needed), and then to try to tie together the section here with prose. Silly rabbit (talk) 21:09, 12 March 2008 (UTC)
I am against removing the list. But it's o.k. to reduce its size here. The existence of the list in this page helped me realize that there are many distributions, and that they are divided into groups. I think that if, at all, we are going to split, then this article needs to have many references to the List of distributions, and highlight the existence of that page, or else, people will not be aware of this important and useful list. Sandman2007 (talk) 17:00, 3 April 2008 (UTC)

I agree with Silly rabbit. There should be a list of different PDFs (as an article on its own). A link to the new article should be provided in this article.

Topology Expert (talk) 11:17, 2 August 2008 (UTC)

Tying them together with prose is a darn good idea. II | (t - c) 07:29, 7 August 2008 (UTC)

OK, I've split that section of this page to the list page, combining the two and tweaking slightly in the process. If you want to flesh out Probability_distribution#List_of_probability_distributions again with prose, please do feel free to do so. It Is Me Here t / c 19:20, 24 November 2008 (UTC)

Lévy distribution

Among the pictures of the distributions on the right-hand side there is one with the caption 'Levy distribution'. It does however not actually show a Lévy distribution but a Lévy skew alpha-stable distribution which are two completely different things despite their similar name. This is confusing and should be fixed. The caption is missing the accent on the e anyways. 141.24.93.165 (talk) 08:02, 17 June 2008 (UTC)

Done. Eliezg (talk) 09:23, 17 June 2008 (UTC)

Opening paragraph

This is by far the worst opening paragraph I've read in a long time:

A probability distribution describes the values and probabilities associated with a random event. The values must cover all of the possible outcomes of the event, while the total probabilities must sum to exactly 1, or 100%. For example, a single coin flip can take values Heads or Tails with a probability of exactly 1/2 for each; these two values and two probabilities make up the probability distribution of the single coin flipping event. This distribution is called a discrete distribution because there are a countable number of discrete outcomes with positive probabilities.

The words "outcome" and "event" have precise meanings in probability theory. A probability distribution does not assign "probabilities" (plural) to "an event" (singular), and what the word "values" means is completely opaque, and a event is not a thing that has outcomes.

The paragraph deceives the reader. Michael Hardy (talk) 17:05, 20 August 2008 (UTC)

"Deceive" suggests that the author was malicious rather than incompetent: did you have any reason to suppose that? Richard Pinch (talk) 18:49, 20 August 2008 (UTC)

What I meant was that it leaves the reader worse off than if he had not read it, if the reader is credulous. Michael Hardy (talk) 19:36, 20 August 2008 (UTC)

In response to the above and the following comment (from my talk page):
I suppose I should pay attention to this article. This edit replaced a paragraph that was badly written with one that is false and in places incomprehensible because the terms don't make sense. What in the world does the word "values" mean in this case????? And to say it associates probabilities (plural) with "a random event" (singular) is clearly false. In the mean time, it ignores what probability distributions actually are. "The values must cover all of the possible outcomes of the event" misuses every important word within the sentence, and could only be written by someone who's heard the various words and ignores completely what they mean. Michael Hardy (talk) 16:49, 20 August 2008 (UTC)
I made an attempt to make a comprehensible first paragraph in response to several comments (see above) regarding the original lead's technical and incomprehensible nature. It was an initial stab and I am glad it is improved. It was based on attempts I've made in a university course I teach on quantitative methods in ecology, which combines statistical methods with mathematical modeling, to introduce the concepts of random variable and the probability distributions that describe them to students for whom such concepts are unfamiliar. It is nice that some of the sloppy use of the terms in the introduction has been disambiguated. The sentence "The values must cover all of the possible outcomes of the event" was meant to reflect the fact that the distribution is defined over the range of possible values for a given random variable. Thus, the roll of a fair die is, as I meant it, the "outcome" of an "event", the "possible values" of which are 1, 2, 3, 4, 5 and 6, and the associated probabilities of which are 1/6, 1/6, 1/6, 1/6, 1/6 and 1/6. The combination of possible outcomes and associated probabilities defines the probability distribution. I would be interested to know more about the misuse of my terms. Thanks for your passionate concern about the quality of this obviously important article. Best, Eliezg (talk) 00:14, 23 August 2008 (UTC)

You're misusing the word "event". Events are the things to which probabilities are assigned. An event is a subset of the set of all possible outcomes. Throwing a die is NOT an event. Getting either a "5" or a "6" is an event. That is completely standard usage. Michael Hardy (talk) 02:36, 26 August 2008 (UTC)
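The standard usage Michael Hardy describes can be sketched in a few lines of code (a hedged illustration only; the names here are mine, not from the thread): outcomes are the elements of the sample space, an event is a subset of it, and the probability of an event is the sum of the probabilities of the outcomes it contains.

```python
# Illustrative sketch of standard probability terminology:
# outcome = element of the sample space; event = subset of it.
from fractions import Fraction

# Sample space for one roll of a fair die, with each outcome's probability.
sample_space = {face: Fraction(1, 6) for face in range(1, 7)}

def prob(event):
    """Probability of an event, i.e. of a subset of the sample space."""
    return sum(sample_space[outcome] for outcome in event)

# Throwing the die is not an event; "getting a 5 or a 6" is one.
five_or_six = {5, 6}
assert prob(five_or_six) == Fraction(1, 3)
assert prob(set(sample_space)) == 1  # the whole sample space has probability 1
```

Using `Fraction` keeps the arithmetic exact, so the probabilities sum to exactly 1 rather than approximately.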

Discrete distribution; finite supports

All the discrete distributions given under the section 'Discrete distributions; finite supports' trivially have finite supports, since they are defined on finite measure spaces. So to make it easier for the average person to understand, why not change the name of this section to 'Discrete random variables defined on finite sets'?

Perhaps also, it may be good to note later in the section on discrete distributions with finite support that these discrete distributions are defined on measure spaces equipped with the discrete sigma algebra and the counting measure. In fact, it is particularly important to note that the measure is the counting measure, as it is through this measure that 'beginners' in the topic evaluate probabilities of discrete distributions (they unknowingly do so, of course).

Topology Expert (talk) 14:20, 24 August 2008 (UTC)

The counting measure is not always sufficient for probability distributions with finite support. Oded (talk) 20:07, 24 August 2008 (UTC)

I am assuming that you are right, but I am just interested to know what you mean by 'the counting measure is not always sufficient for probability distributions with finite support'. Could you please tell me? Whenever one evaluates the probability that a random variable having the binomial distribution, the hypergeometric distribution, the Poisson distribution, etc., attains a value in a given set, one always uses the Lebesgue integral of the distribution function over the set. The Lebesgue integral cannot be defined without a measure, and the specific measure used is the counting measure, and that always gives the correct probability.

Topology Expert (talk) 08:53, 25 August 2008 (UTC)

The random variable that takes the value 1 with probability p and 0 with probability 1 − p cannot be defined using counting measure if p is irrational. In this case, you don't need any sophisticated measure theory, of course. The Lebesgue integral is usually used with respect to the Lebesgue measure on ℝ. Each discrete distribution corresponds to a probability measure defined on a countable set. For such a measure, the technicalities of measure theory do not arise. But counting measure is not sufficient. Oded (talk) 16:45, 25 August 2008 (UTC)
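The point under discussion can be sketched concretely (an illustrative example of mine, not from the thread): for a discrete random variable, the Lebesgue integral of the pmf with respect to counting measure reduces to a plain sum of pmf values over the event, and the Bernoulli example with irrational p shows the probability measure itself is distinct from the counting measure it is integrated against.

```python
# Illustrative sketch: P(X in A) for a discrete random variable, computed as
# the integral of the pmf against counting measure, i.e. a sum over A.
import math

def prob(pmf, event):
    """P(X in event) = sum of pmf values over the points of the event."""
    return sum(pmf(x) for x in event)

# Bernoulli with an irrational p: the support {0, 1} is counted, but the
# measure assigned to the two points is p and 1 - p, not counting measure.
p = 1 / math.sqrt(2)
bernoulli_pmf = {1: p, 0: 1 - p}.get

assert math.isclose(prob(bernoulli_pmf, {0, 1}), 1.0)
assert math.isclose(prob(bernoulli_pmf, {1}), p)
```

The same `prob` function works for any pmf with countable support; the counting measure only supplies the "sum over points" recipe, while the pmf carries the actual probabilities.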

You are right. I was thinking that one defines the probability that a random variable will attain a value in a measurable subset of a measure space X to be the Lebesgue integral of the probability density function over that set. According to this, the counting measure on a measure space, along with your PDF, will yield the appropriate values for the probability. Could you please tell me why this definition is not used for the probabilities of a random variable? I am quite sure that it is, but what you have said seems to contradict this.

Topology Expert (talk) 04:46, 26 August 2008 (UTC)

What you say would also work. But perhaps it is more natural to work with the probability measure itself, rather than represent it via the PDF, which is its Radon–Nikodym derivative with respect to the counting measure. Oded (talk) 05:22, 26 August 2008 (UTC)
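The relationship Oded describes can be stated in one line (generic notation of mine, not from the thread): for a discrete probability measure μ on a countable set, with counting measure c, the pmf f is the Radon–Nikodym derivative of μ with respect to c, and integrating it recovers the probabilities.

```latex
% mu = discrete probability measure, c = counting measure, f = pmf
\[
  \mu(A) \;=\; \int_A f \, dc \;=\; \sum_{x \in A} f(x),
  \qquad f = \frac{d\mu}{dc}.
\]
```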

The Theory of probability distributions Index/Menu

I think that the "Theory of probability distributions" index should run down the right-hand side of each page to which it links, similar to the Electromagnetics group of articles. —Preceding unsigned comment added by 202.7.183.131 (talk) 05:14, 17 October 2008 (UTC)
