Talk:Normal distribution/Archive 1

From Wikipedia, the free encyclopedia

Updated section

I have refactored this page, and I have updated the "occurrence" section of the main article. — Sun Aug 17 16:33 UTC 2003 Miguel

on terminology and layout

the "bell curve"

Should "Bell curve" be capitalized? I think not, because it is not the curve of Graham Bell, but it is a curve that looks like a bell. --AxelBoldt


You're right! --LMS

its shape resembles a bell, which has led to it being called the bell curve

Every reference describing normal distributions ultimately says this, and yet I have never seen a single normal distribution that I would describe first and foremost as bell-shaped. Some of them do perhaps mildly resemble some quite odd-shaped bells, but I think even these cases are a stretch. Does anyone else find this terminology silly, or, even better, know its historical development? --Ryguasu 10:22 Dec 2, 2002 (UTC)

I think most of them look like bells. They just don't have the hanging thing in the middle. -- TastyCakes

pdf, cdf, plots, tables

Why is there a table of the Gaussian cumulative distribution function here?

  • Wikipedia is not a repository of source texts,
  • it is not useful as an authoritative source (as anyone can edit), and
  • a graph would be much more informative about its properties.

-- Anon.

occurrence of the normal distribution

challenges to conventional wisdom

I don't like the examples at all. If a species shows sexual dimorphism, the size of specimens won't be a gaussian, just like the text points out about human blood pressure. Also, test scores are basically an example of the Gaussian limit of Binomials, and GPAs certainly do not follow a gaussian distribution because of grade inflation and limited range of grade points.

To me, these examples smack of the fallacy that "everything is gaussian". See Zipf's law.

My rewrite of this page is still under way, in any case. -- Miguel

You could add those counter examples to the list of variables that don't follow the Normal distribution.

I don't understand the point about IQ scores and binomials, though. Which binomials are you thinking of? AxelBoldt

Test scores and the binomial-to-gaussian limit

It's a bit more complicated than I made it sound, but the point is that the test score is basically a count of the number of correct answers, and therefore is a discrete variable like a Binomial and not a continuous variable like a Normal, so there is a Binomial-to-Normal limit involved. Here's the actual argument, written out:

Take a test that is composed of N True/False questions. Characterize a test-taker by their probability p of getting a right answer. Then their score will be a binomial B(N,p).

Now, consider a population of test takers. There will be a function F(p) on [0,1] which is the probability density that a randomly selected test-taker will have probability p of getting the right answer. Then, when you administer the test to a sample of test-takers, the probability distribution of the number of correct answers will be the convolution of B(N,p) and F(p), or

P(n) = int_[0,1] P(B(N,p)=n) F(p) dp

-- Miguel
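As an illustration, that mixture can be simulated directly. The Beta(4,2) ability density used here is only an arbitrary stand-in for F(p), and N=20 is an arbitrary test length:

```python
import random

# Score distribution of a test: each test-taker has ability p drawn from F(p),
# then answers N questions independently, each correct with probability p.
# The score n is Binomial(N, p) mixed over F(p), as in the formula above.
random.seed(0)
N = 20

def sample_score():
    p = random.betavariate(4, 2)   # illustrative choice of F(p); any density on [0,1] works
    return sum(random.random() < p for _ in range(N))

scores = [sample_score() for _ in range(10000)]
mean = sum(scores) / len(scores)
# For Beta(4,2), E[p] = 4/6, so E[n] = N * E[p] ≈ 13.3
print(round(mean, 1))
```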

If N is big enough, then the distribution of n/N should be pretty much the same as the distribution of p though (is that right?). So the truly interesting question is then: Why is p approximately normally distributed (or is it?). I would claim that it is, because of the central limit theorem (p pretty much describes the "intelligence" of a person, which is the result of many small, mostly independent additive effects). AxelBoldt

If N is big enough we can use the central limit theorem to replace P(B(N,p)=n) by the density of a N(Np,Np(1-p)). In other words,

P(n/N = x) = int_[0,1] exp(-N(p-x)^2 / 2p(1-p)) (N / 2πp(1-p))^(1/2) F(p) dp

This is not exactly a gaussian convolution, but it comes close. The observed P(n/N) is a smoothed-out version of F(p). The three features of this that I want to stress are 1) we did use the theorem of de Moivre-Laplace; 2) the gaussian has variance p(1-p)/N, which means that the test performs best with very good or very bad test-takers, for whom the test is unnecessary anyway; 3) in the limit of infinite N you recover F(p) exactly.
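A quick numerical check of the de Moivre-Laplace replacement used above (the parameter values N=100, p=0.3 are arbitrary illustrations):

```python
import math

# de Moivre-Laplace: for large N, P(B(N,p) = n) is close to the density of
# a normal with mean Np and variance Np(1-p), evaluated at n.
def binom_pmf(N, p, n):
    return math.comb(N, n) * p**n * (1 - p)**(N - n)

def gauss_approx(N, p, n):
    var = N * p * (1 - p)
    return math.exp(-(n - N * p)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Compare the exact binomial probabilities with the gaussian density
for n in (20, 30, 40):
    print(n, round(binom_pmf(100, 0.3, n), 4), round(gauss_approx(100, 0.3, n), 4))
```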

Now, I don't know what the density F(p) should be, but it has to be on [0,1]. The natural family of distributions on [0,1] is the Beta distribution, but that doesn't mean that it has to be a Beta.

I'll create an entry for the Beta shortly. -- Miguel

Normals and lognormals in biology

I was just reading

Huxley, Julian: Problems of Relative Growth (1932)

and the overwhelming biological evidence is that (as pointed out in the text) growth processes proceed by multiplicative increments, and that therefore body size should follow a lognormal rather than normal distribution. The size of plants and animals is approximately lognormal.

Of course, a sum of very many small lognormal increments approaches a normal, but except in the growth of inert appendages such as shells, hair, nails, claws and teeth, growth is best modeled by a multitude of random multiplicative increments.

One should not expect height distributions in humans to be normal, but lognormal, and the usual statement that they are normally distributed is not supported by an application of the Central Limit Theorem.

--Miguel
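A small simulation of this growth model (the increment distribution and step count are arbitrary illustrations): multiplying many small random factors makes log(size) a sum of many small terms, hence approximately normal, so size itself is approximately lognormal:

```python
import math
import random

# Growth by many small multiplicative increments: size = prod(1 + eps_i).
# By the CLT applied to log(size) = sum(log(1 + eps_i)), log(size) is
# approximately normal, i.e. size is approximately lognormal.
random.seed(1)

def grown_size(steps=400):
    size = 1.0
    for _ in range(steps):
        size *= 1.0 + random.uniform(-0.02, 0.04)  # illustrative increment distribution
    return size

logs = [math.log(grown_size()) for _ in range(2000)]
m = sum(logs) / len(logs)
sd = (sum((x - m)**2 for x in logs) / len(logs)) ** 0.5
skewness = sum((x - m)**3 for x in logs) / len(logs) / sd**3
print(round(skewness, 2))  # near 0: log(size) looks symmetric, as a normal should
```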

statistical tests of the claims

How is it determined that IQ scores, heights, etc. are "approximately normal"? Does someone just collect a very large sample, plot a histogram, and go "wow - it looks like a bell!" I assume there are more formal methods. Also, does anyone have references for the studies concluding the approximate normality of the variables discussed in the article? --Ryguasu 15:35 Dec 10, 2002 (UTC)

The answer is that until someone comes up with a causation model that explains why it should be normal, it is just a guess. It is a common fallacy that "everything is Gaussian". -- Miguel

The tests to check whether a given distribution is normal (or lognormal etc.) are called "goodness of fit" tests. One simple-minded approach is to divide the variable's range into subintervals, let the theoretical distribution predict the probabilities of the various subintervals, and compare those predictions to the observed frequencies with a chi-square test. I don't have references for the relevant studies. Once you have empirically verified that a given distribution is approximately normal, you can of course dream up all sorts of explanations, typically that the given variable can be seen as the result of many small additive influences. AxelBoldt 23:36 Dec 10, 2002 (UTC)
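The simple-minded chi-square approach described above can be sketched in a few lines of Python (standard library only; the bin edges and sample are illustrative, and the normal's parameters are taken as known rather than estimated):

```python
import math
import random

# Goodness-of-fit sketch: bin the data, compare observed counts to the
# counts predicted by a standard normal distribution.
random.seed(2)

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

data = [random.gauss(0, 1) for _ in range(1000)]
edges = [-math.inf, -1.5, -0.5, 0.5, 1.5, math.inf]

observed = [sum(lo < x <= hi for x in data) for lo, hi in zip(edges, edges[1:])]
expected = [len(data) * (normal_cdf(hi) - normal_cdf(lo)) for lo, hi in zip(edges, edges[1:])]

chi2 = sum((o - e)**2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # compare against a chi-square table with 5 - 1 = 4 degrees of freedom
```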
It's not clear to me that tests for normality indicate anything useful. Specifically I am referring to chi-square tests and Kolmogorov-Smirnov tests; these and related tests are conventional null-hypothesis significance tests, and share the problems common to such tests. In particular, since it is known a priori that there are upper and lower limits to human height (for example), the distribution of human height cannot be normal, nor can it be log-normal. Thus any failure to reject the null hypothesis can be due only to lack of data. A test which tells you "you have a big data set" isn't very useful; it certainly isn't any more useful than looking at a histogram and saying, "Hmm, yes, looks normal to me". If we need to mention such tests, I'd direct the reader elsewhere. Disclaimer: I am a Bayesian; be that as it may, please respond to technical issues. Wile E. Heresiarch 20:28, 5 Jan 2004 (UTC)

Several people have indicated to me that there are indeed articles and/or books out there somewhere that go through real-life data in a number of domains, showing many instances of the normal distribution being a good approximate fit. Unfortunately, nobody seems to remember where they might have seen these discussions. Has anyone seen one? I would really appreciate seeing some real data, rather than just being told what seems to be roughly normal and what doesn't. --Ryguasu 21:14 Feb 10, 2003 (UTC)

Funny you should ask. I have exactly the same question. Every statistics textbook claims this, but they never seem to back it up with references. I posted the same question to usenet, and here is what I got. AxelBoldt 21:55 Feb 10, 2003 (UTC)

Somebody pointed me to History of Statistics books by Stigler, and they look promising. AxelBoldt 15:51 Feb 12, 2003 (UTC)

Axel, what did you find in Stigler's books? -- Miguel

Maximum-entropy argument

Isn't it true that the normal distribution N(μ,σ^2) is the distribution with the largest entropy among all distributions with mean μ and variance σ^2? That would make it the "default choice" in a certain sense. AxelBoldt

Yes, you can show that the distribution with the largest entropy given <X> and <X^2> has density proportional to exp(-ax - bx^2).

The entropy of the density f is minus the expected value of log f. Finding the maximum entropy given <x> and <x^2> is an easy variational problem with Lagrange multipliers. — Miguel 05:15, 2004 Apr 20 (UTC)
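A sketch of that variational problem (the λ's are the Lagrange multipliers for normalization and the two moment constraints):

```latex
% Maximize the entropy H[f] = -\int f \ln f \, dx subject to the constraints
% \int f \, dx = 1, \int x f \, dx = \mu, \int x^2 f \, dx = \sigma^2 + \mu^2.
\mathcal{L}[f] = -\int f \ln f \, dx
  + \lambda_0 \left( \int f \, dx - 1 \right)
  + \lambda_1 \left( \int x f \, dx - \mu \right)
  + \lambda_2 \left( \int x^2 f \, dx - (\sigma^2 + \mu^2) \right)
% Setting the functional derivative to zero:
\frac{\delta \mathcal{L}}{\delta f} = -\ln f - 1 + \lambda_0 + \lambda_1 x + \lambda_2 x^2 = 0
\quad \Longrightarrow \quad
f(x) \propto e^{\lambda_1 x + \lambda_2 x^2}
% With \lambda_2 < 0 and the constraints enforced, this is exactly N(\mu, \sigma^2).
```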

Proposal to reorder the examples

I'd like to suggest that the examples section be put in this order: (1) central limit theorem and approximation of other distributions by the normal distribution; (2) exactly normal distributions found in nature; (3) close approximations to normal distributions found in nature; (4) non-normal distributions found in nature. I'd further suggest that we strike the IQ example. The examples section shouldn't contain distributions that you might possibly consider normal, if you're feeling charitable and you are willing to overlook some serious difficulties: the examples should be solid examples. Wile E. Heresiarch 20:28, 5 Jan 2004 (UTC)

  • I've reordered the examples as described above. To further improve the examples, I would suggest (1) merging the bullet list with the named sections; I don't think it buys us anything to have both, and (2) for each example, give a reference to a work in the appropriate field -- e.g., physics, economics, etc, not a statistics book. Wile E. Heresiarch 19:27, 11 Jan 2004 (UTC)

Part of the problem is that, other than mathematical applications of the central limit theorem, we have not been able to substantiate the common claim that all kinds of things are approximately normal. Axel actually undertook a search for actual successful tests of normality in the literature and found only claims without references. I would strike out the IQ example, too, but given the common association of IQ with "the Bell curve", we might have to spend some time debunking it. I replaced the claim that sizes of biological specimens are normal with the discussion of growth, which has been standard biology since the 1930's but still has not made it to statistics textbooks! I agree on the reorganization, but I would add (5) common misconceptions about normality. -- Miguel 12:11 6 Jan 2004

normals and multivariate random variables

I have a question: is the sum of two normal variables always normal, even if the two variables are not independent? (Let's treat constant variables as normal for now, with σ=0.) --AxelBoldt

I just got the answer from the EFnet irc channel #math: the sum does not have to be normal. To quote:

 <not_cub> How about two normals dependent in the following way, (means are 0). If X<0,
           choose Y in the middle 50-th percentiles. If X>0 choose Y outside. Then   
           clearly X+Y is not even symmetric

AxelBoldt

No. One can find two normals that are uncorrelated, i.e., their covariance is zero, the sum of which is not normal. Let X be a standard normal. Let Y be X or -X according as |X| is small or large; "large" means bigger than a specified number c. If c is big enough then cov(X,Y) < 0; if c is close to 0 then cov(X,Y) > 0; since cov(X,Y) depends continuously on c, there is an intermediate value of c for which cov(X,Y) = 0. X and Y are both standard normals.

Two random variables X, Y have a joint normal distribution if their joint distribution is such that for any two reals a and b the linear combination aX+bY is normal. A similar definition applies to more than two normals. The distribution of a vector of joint normals is determined by the means and the covariances (including the variances). The whole matrix of covariances is sometimes called the covariance matrix, but I prefer to call it the variance, since it's the natural higher-dimensional analog of the variance. It is always a non-negative-definite matrix. Michael Hardy 23:44 Jan 15, 2003 (UTC)
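Michael Hardy's construction is easy to check numerically. With Y = X when |X| ≤ c and Y = -X otherwise, the sum X + Y equals 2X on |X| ≤ c and is exactly 0 otherwise, so it has an atom at 0, which no normal distribution has (the value c = 1 below is arbitrary, not the c that makes cov(X,Y) = 0):

```python
import random

# X standard normal; Y = X when |X| <= c, else Y = -X.
# Y is also standard normal by symmetry, but X + Y is not normal:
# it is 2X when |X| <= c and exactly 0 otherwise.
random.seed(3)
c = 1.0

xs = [random.gauss(0, 1) for _ in range(10000)]
ys = [x if abs(x) <= c else -x for x in xs]
sums = [x + y for x, y in zip(xs, ys)]

atom = sum(s == 0.0 for s in sums) / len(sums)
print(round(atom, 2))  # roughly P(|X| > 1) ≈ 0.32; a normal puts zero mass on any point
```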

non-normal distributions

QUESTION: can some give me an example with nonnormal distribution and explain why it is not normal? (mia)

There are infinitely many such distributions. You might check out the binomial distribution. (It's not the best example, because with some parameters it can resemble a normal distribution. But it's relatively easy to understand.) Note that "normal" here has two meanings:

  1. pertaining to the "Normal distribution"
  2. typical, standard, normal, etc..

The "Normal distribution" is the only one that is normal in the first sense. Several different distributions (including the binomial distribution) can be considered normal in the second sense. I don't know if this is clear in the article. --Ryguasu 02:42 Jan 30, 2003 (UTC)

I advocate renaming the article from "Normal" to "Gaussian". We are perpetuating a 19th-century fallacy by sticking to the name "normal". Moreover, in generalizations the name "gaussian" is more common - nobody talks about normal stochastic processes, but gaussian stochastic processes. Also, people talk about non-gaussianity of their data when they do scientific work, possibly because "non-normality" sounds too similar to "abnormality" and should be de-emphasized. -- Miguel
I disagree, because the expression "Normal distribution" is, as far as I know, more common than "Gaussian distribution". Of course, "Gaussian" is a better name (as the article explains), but that is irrelevant. -- Jitse Niesen 22:01, 17 Aug 2003 (UTC)
I have a PhD in statistics, and I seldom see "Gaussian" and I see "normal" every day, and "non-normality" is the usual locution, whereas "non-gaussianity" is one I've never heard nor read. "Gaussian" is a misnomer since de Moivre would be the appropriate eponym. Michael Hardy 04:03, 18 Aug 2003 (UTC)
"Gaussian" has become the more common term in mathematical physics, which is where I come from. And I agree that de Moivre would be the right name to use. By the way, I read the Doctrine of Chances for a term paper a long time ago, and it's great fun. Did you know that the law of large numbers proves the existence of God? -- Miguel
I never heard of the theological implications; how do those work? Physicists sometimes seem out of touch when they talk about probability. For example, it seems some of them define the "error function" as
\mbox{erf}(x)=\int_{-\infty}^x e^{-u^2}\,du
rather than as
\int_{-\infty}^x e^{-u^2/2}\,du.
Michael Hardy 14:22, 18 Aug 2003 (UTC)
Yes, it's rather hard to be a physicist and care about precision in probability theory, you quickly feel like an outcast... As for the Theological implications, let me quote the man himself:
What we have said is also applicable to a Ratio of Inequality, as appears from our 9th corollary. And thus in all cases it will be found that altho' Chance produces Irregularities, still the Odds will be infinitely great, that in the process of Time, those Irregularities will bear no proportion to the recurrency of that Order which naturally results from ORIGINAL DESIGN.
...Again, as it is thus demonstrable that there are, in the constitution of things, certain laws according to which Events happen, it is no less evident from Observation, that those Laws serve to wise, useful and beneficent purposes: to preserve the stedfast Order of the Universe, to propagate the several Species of beings, and furnish to the sentient Kind such degrees of happiness as are suited to their State.
But such Laws, as well as the original Design and Purpose of their Establishment, must all be from without; the Inertia of matter, and the nature of all created Beings, rendering it impossible that any thing should modify its own essence, or give to itself, or to anything else, an original determination or propensity. And hence, if we blind not ourselves with metaphysical dust, we shall be led, by a short and obvious way, to the acknowledgement of the great MAKER and GOVERNOR of all; Himself all-wise, all-powerful and good.
If I paraphrased it, I couldn't do it justice. -- Miguel
A Google search gives "gaussian distribution" 90,000 hits, and "normal distribution" 250,000. It seems to me that the top "gaussian" hits are more authoritative than the top "normal" hits, but that's hardly a compelling argument. -- Miguel

Thank you for your reply. I need an example of a non-normal distribution. It is for a project at college. We are on a chapter in Statistics about Gauss, how to get from discrete to continuous variables, and the bell curve and the normal distribution. Our teacher said that there are only 3 cases of non-normal distributions, and that we shouldn't look in astronomy because there everything is normally distributed. To help you understand better what I need: IQ scores, for example, are normally distributed on the bell curve. When we put the mean in the middle, 50% is on the left side and 50% on the right, and the bell curve is symmetrical. Well, I need exactly the opposite. I need to write about something which is not normally distributed and why it is not. From what I understood it will be a thing like temperature, height, IQ scores, etc. The page about the normal distribution has 3 non-normal cases, but I don't understand why they are not normal and I don't know if they are correct. I hope I made clearer what I need and I hope you can help me with that. Thank you. (mia)

Oh, boy, your professor needs a clue. -- Miguel

Some context would be needed before that "3 cases" comment can be understood. The waiting time until the arrival of the next phone call at a switchboard is usually modelled by a memoryless exponential distribution. That can serve as another example. An exponentially distributed random variable is always positive and is memoryless; normally distributed r.v.s do not have those properties. Michael Hardy 18:47 Jan 30, 2003 (UTC)
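The memorylessness mentioned above, P(T > s+t | T > s) = P(T > t), can be verified by simulation (the rate and thresholds below are arbitrary illustrations):

```python
import random

# Memorylessness of the exponential waiting time: having already waited s,
# the chance of waiting a further t is the same as the unconditional chance
# of waiting t. No normal random variable behaves this way.
random.seed(4)
rate = 2.0
waits = [random.expovariate(rate) for _ in range(100000)]

s, t = 0.5, 0.3
p_uncond = sum(w > t for w in waits) / len(waits)
survived = [w for w in waits if w > s]
p_cond = sum(w > s + t for w in survived) / len(survived)

print(round(p_uncond, 2), round(p_cond, 2))  # both near exp(-rate*t) ≈ 0.55
```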

New comments not related to discussions above

I have made several edits, none major. Here I'll call out the ones that might be considered slightly controversial.

  • throughout: Gaussian/gaussian distribution -> normal distribution (use consistent terminology throughout article)
  • para 1: strike "Among people... (notably physics)" (lots of people in lots of fields use Gaussian or normal as the fancy strikes them -- there's no clean division and it's not necessary in this article to sort it all out)
  • under "Probability density function": strike (the "68 - 95.5 - 99.7 rule") (the quote marks suggest that this is a well known phrase, but I've never seen such a rule in a couple of decades of reading, and Google doesn't know about it either)
  • abbreviated "cumulative distribution function" to "cdf" throughout (I hope it's clearer this way)
  • reorganized discussion of cdf of standard and general cases
  • under "Generating Gaussian random variables": strike "beautiful" (the Box-Muller transform is not particularly striking; it certainly isn't any more so than any of the other results cited in this article; if anything here is beautiful it's the central limit theorem)
  • rephrased summary of central limit theorem
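For reference, the Box-Muller transform mentioned in the edit summary above takes only a few lines (a sketch, not the article's code):

```python
import math
import random

# Box-Muller transform: two independent uniforms on (0,1) become
# two independent standard normal deviates.
random.seed(5)

def box_muller():
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

samples = [z for _ in range(5000) for z in box_muller()]
m = sum(samples) / len(samples)
v = sum((z - m)**2 for z in samples) / len(samples)
print(round(m, 2), round(v, 2))  # sample mean near 0, sample variance near 1
```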

It's a good topic and I believe the technical part of the article is in good shape. I think the applications section needs more work, though. Have at it, Wile E. Heresiarch 18:41, 4 Jan 2004 (UTC)

Names of the distribution in different fields

Hello. At present the intro paragraph reads "Many physicists and engineers call it the Gaussian distribution, but normal distribution is the name by which it is known to probabilists and statisticians." If I'm not mistaken, both names are in general use in many fields. Can we really pin down who calls it what, and do we need to? -- How about if it reads "The normal distribution is also called the Gaussian distribution, especially in physics and engineering." Other suggestions? Happy editing, Wile E. Heresiarch 22:02, 2 Apr 2004 (UTC)

Yes, I prefer your suggestion to the present text. --Zero 05:24, 3 Apr 2004 (UTC)
Well, I went ahead and changed the text as I suggested above. I really think that we needn't worry about who calls it what; maybe someone will be inspired to remove "physics and engineering" entirely. Wile E. Heresiarch 03:11, 8 Apr 2004 (UTC)

Everything is Gaussian?

What does 'everything is Gaussian' mean as used on this page? A Google search turns up... er... this page. I don't think it is at all clear from the article what it actually means. --Mysteronald 18:07, 19 Sep 2004 (UTC)

Well, what's intended is that the most common statistical tests work as advertised, to the extent that they work at all, if the distributions involved are exactly Gaussian, and approximately otherwise. (I'm thinking of the t-test and the F-test here.) Even though we really know that the distributions involved are never or maybe almost never Gaussian, people still use the same tests. I agree that the statement about "everything is Gaussian" is confusing; maybe you or I can think of a better way to word it. Regards & happy editing, Wile E. Heresiarch 04:18, 21 Sep 2004 (UTC)