
Wikipedia:Reference desk/Archives/Mathematics/2018 January 31

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 31

Expectation value and standard deviation of the largest of n draws from standard normal distribution

I only found closed form expressions for n = 2 and n = 3. Are there closed form expressions for the expectation value and standard deviation for larger n, or perhaps even closed form expressions for general n?

For n = 2, I find that the expectation value is 1/√π and the standard deviation is √(1 − 1/π).

For n = 3, the expectation value is 3/(2√π), but I don't have a closed form for the standard deviation.

Count Iblis (talk) 03:35, 31 January 2018 (UTC)[reply]

Just a bit of clarification, when you say you found the values for n=2 and n=3, did you find the values in a book or online (if so, where?) or were you able to derive them? Also, by standard normal I'm assuming you mean the distribution with density
(1/√(2π)) e^(−x²/2)
and not the 'other' standard, (1/√π) e^(−x²).
--RDBury (talk) 08:35, 31 January 2018 (UTC)[reply]
I was able to derive the closed form expressions for the expected values for n=2 and n=3, and reduce the value for higher n to a single integral, but I wasn't able to get this into closed form for n=4. Using the same method I was able to get closed form expressions for the 2nd moment for n=2 and n=3 (new), but again it doesn't seem to simplify for n=4. I'll adopt the notation used in Normal distribution, namely
φ(x) = (1/√(2π)) e^(−x²/2) and Φ(x) = ∫_{−∞}^{x} φ(t) dt.
The CDF for the maximum of n values is Φ(x)^n and so the corresponding density function is n φ(x) Φ(x)^(n−1).
We're interested in the kth moments for these distributions:
M(n, k) = n ∫_{−∞}^{∞} x^k φ(x) Φ(x)^(n−1) dx,
specifically for k=1 for the expected value and for k=2 to get the standard deviation. It's clear that M(1, 1) = 0. Also, using φ′(x) = −x φ(x) and integrating by parts,
M(1, 2) = ∫ x² φ(x) dx = −∫ x φ′(x) dx = ∫ φ(x) dx = 1,
from which σ = 1 for n=1.
(This is well known but it's good to have a bit of a warm-up.) For n>1 and k=1
M(n, 1) = n ∫ x φ(x) Φ(x)^(n−1) dx = −n ∫ φ′(x) Φ(x)^(n−1) dx,
and integrating by parts gives
M(n, 1) = n(n−1) ∫ φ(x)² Φ(x)^(n−2) dx.
For n=2 this is
M(2, 1) = 2 ∫ φ(x)² dx = 2 · (1/(2√π)) = 1/√π,
as given above.
For n=3 this is
M(3, 1) = 6 ∫ φ(x)² Φ(x) dx = 6 ∫ φ(x)² (Φ(x) − 1/2) dx + 3 ∫ φ(x)² dx = 3 ∫ φ(x)² dx,
since φ(x)² (Φ(x) − 1/2) is odd. Then
M(3, 1) = 3/(2√π)
by the previous result.
For n=4 the integral is
M(4, 1) = 12 ∫ φ(x)² Φ(x)² dx,
which may simplify using some of the formulas in List of integrals of Gaussian functions, but I don't see the way forward atm. It appears the integrals get more and more difficult as n increases, so I'm guessing that while there may be some recursive formula, a closed form expression is unlikely.
For k=2 the integral is
M(n, 2) = n ∫ x² φ(x) Φ(x)^(n−1) dx,
and using integration by parts as before this becomes
M(n, 2) = 1 + n(n−1) ∫ x φ(x)² Φ(x)^(n−2) dx.
For n=2 the integral
∫ x φ(x)² dx
is 0 because the integrand is odd, therefore M(2, 2) = 1, agreeing with the value for the standard deviation given above.
For n=3 set
I = ∫ x φ(x)² Φ(x) dx.
Then, since x φ(x)² has antiderivative −φ(x)²/2,
integrating by parts gives
I = (1/2) ∫ φ(x)³ dx = 1/(4√3 π),
so
M(3, 2) = 1 + 6I = 1 + √3/(2π).
Therefore σ² = M(3, 2) − M(3, 1)² = 1 + √3/(2π) − 9/(4π), from which the standard deviation for n=3 can be computed. (In this case I don't have an answer to compare to so take this value modulo errors in algebra. Maybe someone can check the work or do a quick computer simulation to verify.) Sorry if the post is over-long, but perhaps a bit of verbosity is acceptable since the forum has been a bit slow in the last week. --RDBury (talk) 14:06, 31 January 2018 (UTC)[reply]
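(Following up on the "quick computer simulation" suggestion, here is a minimal Monte Carlo sketch; it is not anyone's actual script from this thread, and the 10^6 sample size and the seed are purely illustrative choices.)

    import numpy as np

    rng = np.random.default_rng(0)
    # maximum of 3 independent standard normal draws, repeated 10^6 times
    m = rng.standard_normal((1_000_000, 3)).max(axis=1)

    print(m.mean(), 3 / (2 * np.sqrt(np.pi)))                        # M(3,1): both ~0.846
    print((m**2).mean(), 1 + np.sqrt(3) / (2 * np.pi))               # M(3,2): both ~1.276
    print(m.std(), np.sqrt(1 + np.sqrt(3)/(2*np.pi) - 9/(4*np.pi)))  # SD: both ~0.748

With 10^6 draws the agreement is only to two or three decimal places, but that is enough for a sanity check of the algebra above.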
You wrote "The CDF for the maximum of n values is Φ(x)^n". Can you please explain why that's true or link an article or external resource which does? (I fixed your "[Normal distribution]" link and linked CDF. Hope that's OK.) -- ToE 17:50, 31 January 2018 (UTC)[reply]
That cannot be true, if only because that function is symmetric with respect to 0 (which it should not be - the expected value of a maximum of independent nontrivial variables is certainly superior to the expected value of any of the variables). I suspect an abuse of notation slipped into an incorrect formula. TigraanClick here to contact me 18:16, 31 January 2018 (UTC)[reply]
What? That's the cumulative distribution function, not the probability density function or error function. It and its powers (even n=2) won't be symmetric with respect to 0. The powers will shift a given cumulative probability farther right. It behaves as I'd expect the CDF for a maximum of n values would. I just don't see how to prove it. -- ToE 18:43, 31 January 2018 (UTC)[reply]
Huh, yes, sorry. Still, it would be good to know whether it is true. TigraanClick here to contact me 12:19, 1 February 2018 (UTC)[reply]
I actually gave this problem as an exercise for my students this semester (analytically for a uniform distribution, numerically for the normal distribution). We derived/justified the pdf as follows: x is the maximum sample value if (i) x is a sample value, which gives a factor φ(x), with a combinatorial factor n because it doesn't matter where in the sample it appears; (ii) the other n−1 sample values must be ≤ x, which gives a factor Φ(x)^(n−1). This agrees with the derivative of the cdf given above. Our numerical results (for the expectation only) agree with those of Count Iblis, and it's nice to see to what extent the problem can be tackled analytically. --Wrongfilter (talk) 12:38, 1 February 2018 (UTC)[reply]
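(For illustration, the numerical side of such an exercise might look like the following quadrature sketch; SciPy is an assumption here and was not mentioned by anyone in the thread. It integrates x^k · n φ(x) Φ(x)^(n−1) directly.)

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    def max_moment(n, k=1):
        """kth moment of the maximum of n iid standard normal draws, by quadrature."""
        integrand = lambda x: x**k * n * norm.pdf(x) * norm.cdf(x)**(n - 1)
        value, _ = quad(integrand, -np.inf, np.inf)
        return value

    print(max_moment(2), 1 / np.sqrt(np.pi))         # ~0.5642
    print(max_moment(3), 3 / (2 * np.sqrt(np.pi)))   # ~0.8463
    print(max_moment(4))                             # ~1.0294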
• I tackled the problem from a different angle, with the trick of using weird integration domains instead of weird functions to integrate. The result involves an integral with powers of the error function; I am not sure whether a closed-form formula or even a recurrence relation exists, but others may weigh in.
Computations collapsed for readability - TigraanClick here to contact me 18:12, 31 January 2018 (UTC)[reply]

Let X_1, …, X_n be independent random variables with p.d.f. f_1, …, f_n. To evaluate the moment of order k (in our case, we want k=0,1), we would usually integrate over ℝ^n (for all possible values of the X_i) the function (max_i X_i)^k · f_1(X_1) ⋯ f_n(X_n).

The key insight is that instead, we can integrate the simpler function X_1^k · f_1(X_1) ⋯ f_n(X_n) over the domain where X_1 is the maximum, and divide by the weight of that domain (i.e. the integral of f_1(X_1) ⋯ f_n(X_n) over it; that weight is 1 in the previous case, assuming the p.d.f. are normalized). Hence we get:

E[(max_i X_i)^k] = (1/D) ∫_{X_2, …, X_n ≤ X_1} X_1^k f_1(X_1) ⋯ f_n(X_n) dX_1 ⋯ dX_n

where the domain size is

Notice that the integration over all variables is independent (except for the bounds of the X_i, i≥2, which depend on X_1); moreover, all the X_i, i≥2 have the same p.d.f. so this simplifies drastically. Using the same notation as RDBury above with φ and Φ, the numerator becomes

∫_{−∞}^{∞} X_1^k φ(X_1) Φ(X_1)^(n−1) dX_1

and similarly

D = ∫_{−∞}^{∞} φ(X_1) Φ(X_1)^(n−1) dX_1.
It all comes down to the p.d.f. in use. If it were a uniform distribution over an interval, that would be a simple polynomial integral, but the current case calls for a Gaussian distribution, so Φ is the error function (maybe with a scaling factor).

Final result: with RDBury's notations, M(n, k) = (∫_{−∞}^{∞} x^k φ(x) Φ(x)^(n−1) dx) / (∫_{−∞}^{∞} φ(x) Φ(x)^(n−1) dx). I am not sure how much further algebra-lifting and List of integrals of Gaussian functions can get you, but that is good enough that any computer should have no trouble giving you a 10-digit approximation for reasonable n and k. TigraanClick here to contact me 18:12, 31 January 2018 (UTC)[reply]
What an idiot... The integrand in the denominator obviously has the primitive Φ(x)^n / n, so it's easy to integrate:
∫_{−∞}^{∞} φ(x) Φ(x)^(n−1) dx = 1/n.
So the result simplifies to M(n, k) = n ∫_{−∞}^{∞} x^k φ(x) Φ(x)^(n−1) dx. It looks like a good candidate for integration by parts, but for k≥1, the parts are infinite (the x^k Φ(x)^n term will not converge, since the integrand does not approach 0 for large x). TigraanClick here to contact me 13:07, 1 February 2018 (UTC)[reply]
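(The 1/n value of the denominator is easy to confirm numerically; here is a small sketch, again assuming SciPy, which nobody in the thread actually mentions using.)

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    for n in range(1, 7):
        denom, _ = quad(lambda x: norm.pdf(x) * norm.cdf(x)**(n - 1), -np.inf, np.inf)
        print(n, denom, 1 / n)   # the last two numbers should agree to many digits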
First, for a possibly simpler explanation of the cdf of the max of independent variables: this holds for any distribution and even holds if the variables have different distributions. Say variable X has cdf U and Y has cdf V. Then the cdf for max(X, Y) is given by W(x) = P(max(X,Y)≤x) = P(X≤x and Y≤x) = P(X≤x)⋅P(Y≤x) (since the variables are independent) = U(x)⋅V(x). You can use induction to prove the corresponding statement for any finite number of variables.
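(A quick empirical check of that product rule, with an arbitrarily chosen pair of distributions — a standard normal and a uniform on [0, 1] — purely for illustration:)

    import numpy as np
    from scipy.stats import norm, uniform

    rng = np.random.default_rng(1)
    X = rng.standard_normal(1_000_000)      # cdf U = norm.cdf
    Y = rng.uniform(0.0, 1.0, 1_000_000)    # cdf V = uniform.cdf
    M = np.maximum(X, Y)

    for x in (0.2, 0.5, 0.9):
        print((M <= x).mean(), norm.cdf(x) * uniform.cdf(x))  # empirical W(x) vs U(x)*V(x)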
Also, to follow up on some unfinished items above. I have since worked up a very basic Python program to estimate the values; basically it just finds the average of X, X², X³ and X⁴ for 1,000,000 values of X, where X is the maximum of 2, 3 or 4 random values chosen with the standard normal distribution. This only produces about 2 or 3 digits of accuracy but that's good enough for a sanity check. The results match the value of M(3, 2) given above so I'm much more confident that it's the correct value.
I was also able to find the exact value of M(4, 1) as (3/(2√π))(1 + (2/π) arcsin(1/3)), or about 1.029. I agree with Tigraan that to go much further it would be easier to use numerical methods, or at least use a computer algebra package to do the calculus. I've looked at Alpha for this but it seems they use the 'other' standard for the standard distribution and converting the Φ(x) I'm using to their erf(x) would be an issue. I'm not enough of a Mathematica user to know if there's a way around this. --RDBury (talk) 15:22, 1 February 2018 (UTC)[reply]
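(On the Φ-versus-erf issue: one possible workaround is to substitute Φ(x) = (1 + erf(x/√2))/2 explicitly before handing the integral to a computer algebra system. A sketch using SymPy — an assumption on my part, since the thread only mentions Alpha and Mathematica — evaluating M(4, 1) numerically:)

    import sympy as sp

    x = sp.symbols('x', real=True)
    phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # density used in this thread
    Phi = (sp.erf(x / sp.sqrt(2)) + 1) / 2         # its cdf, rewritten via erf

    # M(4, 1) = 4 * integral of x * phi(x) * Phi(x)^3 over the real line
    print(sp.Integral(4 * x * phi * Phi**3, (x, -sp.oo, sp.oo)).evalf())
    # compare with the closed form quoted above
    print(sp.N(3 / (2 * sp.sqrt(sp.pi)) * (1 + 2 / sp.pi * sp.asin(sp.Rational(1, 3)))))

Both prints should give roughly 1.02938.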
Thanks for looking into this! It looks less hopeless to try to find more exact expressions for the moments than when I first looked into this. Count Iblis (talk) 21:58, 4 February 2018 (UTC)[reply]

Negative and fractional exponents in cardinal exponentiation

What are the conditions for the existence of negative and fractional exponents in cardinal exponentiation? The info on this aspect in that article (section) is not very detailed! (Thanks!)--5.2.200.163 (talk) 15:30, 31 January 2018 (UTC)[reply]

The conditions are simple: You can't do it at all. Negative numbers and fractions are not cardinal numbers. --Trovatore (talk) 06:28, 1 February 2018 (UTC)[reply]
Then what about what is said there in the article about roots and logarithms? When roots are involved, doesn't that automatically involve fractional exponents?
What exactly is the relation between cartesian products and cardinal exponents? Are negative and fractional exponents also excluded from cartesian products?--5.2.200.163 (talk) 15:55, 1 February 2018 (UTC)[reply]
No. Roots are the answer to the question "Given b and c, what is the value of a such that a^b = c?". Whenever there is a concept of exponentiation, it makes sense to talk about roots (even if they don't necessarily exist).
For positive real numbers, it so happens that fractions are a thing and the bth root of c is c^(1/b). For cardinal numbers, there are no fractional exponents, but it doesn't mean there is no concept of roots.
The relation between finite cartesian products and cardinal numbers is simple enough - for any set X and finite n, we have |X^n| = |X|^n, where X^n is the n-fold cartesian product. For infinitary products this might get more complicated so I'd best leave that to Trovatore. In any case, I don't know of any notion of fractional or negative cartesian exponents. -- Meni Rosenfeld (talk) 19:15, 1 February 2018 (UTC)[reply]
The "Roots" subsection makes a simple and accurate assertion:

Assuming the axiom of choice and, given an infinite cardinal κ and a finite cardinal μ greater than 0, the cardinal ν satisfying ν^μ = κ will be κ.

That claim is true, and I suppose if you like, you can phrase it as "the μth root of κ is κ". However, I am not aware that anyone working in the field actually does phrase it that way, in practice.
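(For the record, the reasoning behind that assertion is short; a sketch, assuming AC as the article does:)

    % If nu^mu = kappa with kappa infinite and mu finite and nonzero, then nu must
    % itself be infinite (a finite nu would make nu^mu finite).  Assuming AC, an
    % infinite cardinal satisfies nu * nu = nu, so by induction on the finite mu:
    \[
        \kappa \;=\; \nu^{\mu} \;=\; \underbrace{\nu \cdot \nu \cdots \nu}_{\mu\ \text{factors}} \;=\; \nu .
    \]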
The "logarithms" section, on the other hand, specifically defines "logarithm" as "the least cardinal number μ such that κ ≤ 2^μ". Whether this definition is standard I cannot tell you. There are three sources cited but I do not have easy access to any of them. All I can tell you is that I do not ever recall seeing this notion referred to as "logarithm" in any serious set-theoretic work, so if you're planning to write a paper that references it you'd better say what you mean, rather than counting on the reader to guess.
In any case, standard terminology or not, this notion of "logarithm" does not require fractional exponents. The corresponding notion on ordinary numbers would not be "log base 2" but rather "ceiling of log base 2". --Trovatore (talk) 19:48, 1 February 2018 (UTC)[reply]