
Wikipedia:Reference desk/Archives/Mathematics/2011 June 15

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 15


What's the probability of getting twenty heads in a row if you flip a coin one million times?


Here's my reasoning: the twenty in a row could happen in the first twenty tosses, or between the second toss and the twenty first toss, and so on. So there are 10^6 - 19 ways to get twenty heads in a row. So the probability would be (10^6 - 19)/2^20 ~ 95%. Is this right? Incidentally, this isn't a homework question, just something that came up in a conversation. 65.92.5.252 (talk) 01:12, 15 June 2011 (UTC)[reply]

No, there are many more than 10^6 - 19 ways to get twenty heads in a row. To see this, consider how many ways there are to have the first twenty tosses come up heads: the remaining 10^6 - 20 tosses can come up in 2^(10^6 - 20) different ways! (Of course, some of these possibilities also include other runs of twenty heads in a row, besides the first twenty, so you'll need to correct for that overcounting; see Inclusion–exclusion principle.) Your denominator isn't right either: there are 2^(10^6) possible outcomes for the coin tosses, which is vastly larger than 2^20. —Bkell (talk) 01:25, 15 June 2011 (UTC)[reply]
Alright...so how would you do it? 65.92.5.252 (talk) 01:55, 15 June 2011 (UTC)[reply]
This is actually a hard problem. The solution to it is described on this page from Wolfram Mathworld -- it involves some pretty advanced concepts. Looie496 (talk) 02:25, 15 June 2011 (UTC)[reply]
Well, as you say there are 10^6 - 19 possible places for a run of 20. If we treat each of these possibilities as independent (they aren't, but let's pretend), you get 1 - (1 - 2^-20)^(10^6 - 19) ≈ 61.5%. Of course, as mentioned, they aren't independent: having tosses 1 to 20 all come up heads increases the odds that tosses 2 to 21 all come up heads. So outcomes with multiple runs of 20 should be over-represented compared to what you would expect if they were all independent. Since the expected total number of runs of 20 is the same, independent or not, this means that the actual answer to your question should be less than 61.5%.--Antendren (talk) 02:31, 15 June 2011 (UTC)[reply]
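As a rough check of that 61.5% figure in Python (treating the starting positions as independent even though, as noted, they are not):

    # ~0.615: one minus the chance that none of the 10^6 - 19 windows is all heads
    p = 1 - (1 - 2 ** -20) ** (10 ** 6 - 19)
    print(p)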
Thanks. 65.92.5.252 (talk) 15:59, 15 June 2011 (UTC)[reply]

The average number of rows of twenty heads if you flip a coin twenty times is 2^-20.

The average number of rows of twenty heads if you flip a coin a million times is 999981 · 2^-20 ≈ 0.954.

The probability that the number of such rows is equal to k is P_k = e^-0.954 · 0.954^k / k! = 0.385 · 0.954^k / k!

k      0      1      2      3       4       5
P_k    0.385  0.367  0.175  0.0557  0.0133  0.00254

The probability of not getting twenty heads in a row if you flip a coin one million times is P_0 = 0.385.

The probability of getting twenty heads in a row once or more, if you flip a coin one million times, is 1 - P_0 = 0.615. Bo Jacoby (talk) 12:56, 15 June 2011 (UTC).[reply]
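A few lines of Python reproduce the table; this is just the Poisson probability mass function with the mean used above:

    from math import exp, factorial

    # Poisson approximation to the number of runs of twenty heads in 10^6 tosses
    m = 999981 / 2**20                      # expected number of runs, ~0.954
    for k in range(6):
        print(k, exp(-m) * m**k / factorial(k))
    # k = 0 gives ~0.385, so P(at least one run) ~ 0.615, as stated above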

Cool, thanks. 65.92.5.252 (talk) 15:59, 15 June 2011 (UTC)[reply]
Um, wrong. The events are not independent (for example, 1-20 all being heads is not independent of 10-30 all being heads), so the calculation is not valid. There simply is no easy answer to this question; the calculation is genuinely hard. Looie496 (talk) 18:40, 15 June 2011 (UTC)[reply]
See our article Perfectionism (psychology). The proper way to criticize a calculation is to improve it. Bo Jacoby (talk) 19:26, 15 June 2011 (UTC).[reply]
Well, I already said above that the solution is explained on this page. Looie496 (talk) 21:08, 15 June 2011 (UTC)[reply]
Yes, and what is your improved calculation? Bo Jacoby (talk) 06:46, 16 June 2011 (UTC).[reply]

(1 000 000 - 19) * 0.5^20 = 95.37..% Cuddlyable3 (talk) 08:59, 16 June 2011 (UTC)[reply]

No, that's nowhere near the answer. A fairly reasonable estimate is got by looking for the chance of not having any run, so the chance of having at least one would be about 1 - (1 - 1/2^20)^1000000, i.e. about 1 - 1/e or 1 - 0.37 or about 0.63. That's a very rough estimate and doesn't deal with the problem of overlap of intervals at all. Dmcq (talk) 11:14, 16 June 2011 (UTC)[reply]

Note: this page contains a very thorough explanation of the problem and how to solve it, by Mark Nelson. As it happens, he worked it out for the exact values given in the question here, and obtained an answer of approximately 37.9%. To get that number he had to set up a Java program that took about an hour to run. Looie496 (talk) 23:48, 16 June 2011 (UTC)[reply]
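For what it's worth, the ~37.9% figure can be cross-checked by a slower but elementary route (a rough Python sketch, not the method from the link): track the probability of currently sitting on a run of j heads, j = 0..19, without ever having completed twenty.

    # Probability of at least one run of 20 heads in n tosses, by stepping a small
    # Markov chain: state j = length of the current run of heads; the mass that
    # completes a run of 20 is removed.  Pure Python, so it takes a while for n = 10^6.
    def prob_run(n_tosses=10**6, run_len=20):
        state = [0.0] * run_len
        state[0] = 1.0
        for _ in range(n_tosses):
            new = [0.0] * run_len
            new[0] = 0.5 * sum(state)            # tails resets the run to length 0
            for j in range(run_len - 1):
                new[j + 1] = 0.5 * state[j]      # heads extends a run of j to j + 1
            # heads on a run of 19 completes 20 in a row; that mass simply leaves
            state = new
        return 1.0 - sum(state)

    print(prob_run())   # ~0.379253961..., matching the figure quoted above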

I think the answer can be computed exactly, but I haven't worked the details out yet. My approach would be to compute the number of coin configurations with one or more such rows using inclusion-exclusion. When you consider configurations with k rows, you deal with the overlap problem by ordering the rows you put in by hand from left to right and multiplying the result by k!. You then get the following expression for the probability:

This doesn't work. I do think, however, that a derivation using generating functions should be straightforward.

Count Iblis (talk) 00:36, 17 June 2011 (UTC)[reply]

Very nice Looie496! Note that the formula from your link http://marknelson.us/2011/01/17/20-heads-in-a-row-what-are-the-odds/ is

P = 1 - F^{(20)}_{10^6 + 2} / 2^{10^6}

where F^{(k)}_n is the Fibonacci k-step number defined in http://mathworld.wolfram.com/Fibonaccin-StepNumber.html . The author then spends an hour of computer time to compute F^{(20)}_{10^6 + 2} / 2^{10^6} ≈ 0.379, but he forgets to subtract it from one, so the result is P = 0.621. The elementary approximation P ≃ 0.615 above is not "wrong" but amazingly precise! Bo Jacoby (talk) 10:34, 17 June 2011 (UTC).[reply]
No he doesn't. The number he computes is already the probability of getting the run of twenty heads, not its complement, so 0.379 is the correct value. "DIv" doesn't represent what you're assuming it does.--Antendren (talk) 12:11, 17 June 2011 (UTC)[reply]

If anyone cares, my friend ran a computer simulation and got around a 33% success rate. 65.92.5.252 (talk) 16:23, 17 June 2011 (UTC)[reply]

Despite the problem having been "solved", the issue remains that one should be able to compute the answer to arbitrary precision with paper, pencil and a not-so-powerful calculator. In this respect, this problem has not been solved. Also, no simple derivation has been given, so that's another issue with the "solution".

So, let's try to find the solution using generating functions. We can count strings of heads and tails by giving both a head and a tail a weight of x. We then don't have to constrain the length of the string; we just need to extract the coefficient of x^{10^6} from the result. We can count all strings that don't contain rows of 20 or more heads by multiplying generating functions corresponding to r rows of heads of length 1 to 19 and rows of tails of arbitrary length (zero to infinity at the start and end, and 1 to infinity in between the r rows of heads), and summing over r.

For r = 0, we have

\frac{1}{1-x}

as we have only one row of tails of arbitrary length.

For r = 1, we have a row of tails of arbitrary length, a row of heads of length less than 20 and then another row of tails:

\frac{1}{1-x}\left(x + x^{2} + \cdots + x^{19}\right)\frac{1}{1-x} = \frac{x - x^{20}}{(1-x)^{3}}

For r > 1, we have, in addition to the two rows of tails of length zero or larger at the start and the end, r - 1 rows of tails of length 1 or larger in between the rows of heads:

\left(\frac{1}{1-x}\right)^{2}\left(\frac{x}{1-x}\right)^{r-1}\left(\frac{x - x^{20}}{1-x}\right)^{r}

The generating function is thus:

f(x) = \frac{1}{1-x} + \sum_{r=1}^{\infty}\left(\frac{1}{1-x}\right)^{2}\left(\frac{x}{1-x}\right)^{r-1}\left(\frac{x - x^{20}}{1-x}\right)^{r} = \frac{1 - x^{20}}{1 - 2x + x^{21}}
Extracting the coefficient of x^{10^6} is then easy: just find the roots of the denominator, and expand the function in partial fractions. The root closest to the origin yields the dominant contribution. So, you can just approximate the roots numerically and then use simple calculus to estimate the coefficient of x^{10^6}. I found an answer close to that given above, but I made some approximations that I need to check. Count Iblis (talk) 18:20, 17 June 2011 (UTC)[reply]
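As a sanity check of the generating function above (a sketch only, using a run length of 3 instead of 20 so that brute force is feasible): the coefficient of x^n in (1 - x^3)/(1 - 2x + x^4) should count the length-n strings of heads and tails with no run of three heads.

    from itertools import product

    # Coefficients of (1 - x^run)/(1 - 2x + x^(run+1)): expand 1/(1 - 2x + x^(run+1))
    # via its linear recurrence a_n = 2 a_(n-1) - a_(n-run-1), then subtract a shifted copy.
    def gf_counts(n_max, run=3):
        a = [0] * (n_max + 1)
        a[0] = 1
        for n in range(1, n_max + 1):
            a[n] = 2 * a[n - 1] - (a[n - run - 1] if n >= run + 1 else 0)
        return [a[n] - (a[n - run] if n >= run else 0) for n in range(n_max + 1)]

    # Brute force: count length-n H/T strings containing no run of `run` heads.
    def brute(n, run=3):
        return sum(1 for s in product('HT', repeat=n) if 'H' * run not in ''.join(s))

    print(gf_counts(12)[12], brute(12))   # both print 1705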

x ≈ 1/2 is a root of the denominator polynomial 1 - 2x + x^{21}, as the x^{21} term is tiny there. No other root is closer to the origin, according to http://www.wolframalpha.com/input/?i=1-2x%2Bx^21 . Both the numerator and the denominator of f(x) are divisible by 1 - x. Bo Jacoby (talk) 23:27, 17 June 2011 (UTC).[reply]
Ok, then what I have below should be correct. The answer depends sensitively on how far this root is removed from 1/2. Count Iblis (talk) 23:57, 17 June 2011 (UTC)[reply]

I only get the first 5 digits of the answer from Looie's link (0.379253961388950068663971868...) correct, so perhaps I'm making some stupid mistake  :( . This is what I did. The coefficient of x^N of a function f(x) can be written as:

\frac{1}{2\pi i}\oint \frac{f(z)}{z^{N+1}}\,dz

where the integration contour encircles only the pole at z = 0 counterclockwise. The point of this exercise is to avoid having to expand f(x) to large orders. The residue at zero is minus the sum of the residues at all the other poles (the contour integral over a circle with radius R obviously tends to zero if R is sent to infinity, so the sum of all residues is zero). Since those other poles are simple poles, calculating these is a piece of cake. We can also do this via a change of variables z --> 1/z. Then the integral becomes:

\frac{1}{2\pi i}\oint f(1/z)\, z^{N-1}\,dz

The pole at zero of f(z) is now at infinity and all the other poles are now inside the contour. In our case, we have:

f(1/z)\, z^{N-1} = \frac{z^{N}\left(z^{20} - 1\right)}{z^{21} - 2z^{20} + 1}

If there is a simple pole at z = z_0, then that makes a contribution to the probability (which is the coefficient divided by 2^N) of:

\frac{z_0^{N}\left(z_0^{20} - 1\right)}{2^{N}\left(21 z_0^{20} - 40 z_0^{19}\right)}

For N = 10^6, it is obvious that we only need to consider poles at points with a modulus close to 2. Now, there is a pole at:

z \approx 1.9999990463
And assuming that this is the only one with a modulus close to 2, I find using my calculator (by using the log(1+x) function for numbers close to 1, the (exp(x) -1) function for small x whenever necessary to prevent loss of significant digits) that one minus the probability of no rows with 20 or more heads is 0.379251854769, which is not the correct answer. Perhaps I'm missing a pole... Count Iblis (talk) 23:50, 17 June 2011 (UTC)[reply]

Using Mathematica I do find the correct answer; it turns out that the root of the denominator was not determined accurately enough with my calculator (only the first few digits of the deviation from 2 were accurate, so I guess I wasn't careful enough using my calculator when switching to 2 + x to avoid loss of significant digits; there was some spillover into the deviation from 2).

So, the first 3 hundred thousand digits or so of the probability are given by:

P = 1 - \frac{\lambda^{10^{6}}\left(\lambda^{20} - 1\right)}{2^{10^{6}}\left(21\lambda^{20} - 40\lambda^{19}\right)}

where \lambda is the zero closest to 2 of the polynomial:

z^{21} - 2z^{20} + 1
And I think one can get 12 digits for the probability using a simple calculator by computing the root with some care. Count Iblis (talk) 01:07, 18 June 2011 (UTC)[reply]

I've now verified that I can get 10 significant digits correct doing all computations, including the root finding process, using my antique HP-28s calculator. Also, the probability function can be simplified using the equation for lambda to:

P = 1 - \left(\frac{\lambda}{2}\right)^{10^{6}}\frac{\lambda\left(\lambda - 1\right)}{21\lambda - 40}

To find the root, you can write z = 2 + u; the equation becomes:

u\left(2 + u\right)^{20} = -1

And then you can use that (1+x)^N = exp[N Log(1+x)], so

u = -2^{-20}\exp\left[-20\log\left(1 + \frac{u}{2}\right)\right]
So, we can then evaluate this using the log(1+x) function and the exp(x) -1 function (which are easy to program in case your calculator doesn't have them). The root finding function (which can also easily be done by hand using Newton Raphson) gives u = - 9.53683411446*10^{-7} which then yields the final answer to 10 digits accuracy, so I lost 2 significant digits in the whole process. I computed (lambda/2)^10^6 also using the Log(1+x) function by writing it as Exp[10^6 Log(1 + u/2)].

Count Iblis (talk) 02:36, 18 June 2011 (UTC)[reply]
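For reference, the same computation takes a few lines of Python, assuming the simplified formula and the equation for u written above: Newton-Raphson for the root, and log1p to avoid losing significant digits, just as described for the HP-28s.

    from math import exp, log1p

    N = 10**6
    u = -2.0 ** -20                              # first guess for the small correction u
    for _ in range(8):                           # Newton-Raphson on g(u) = u*(2+u)^20 + 1
        g = u * (2 + u) ** 20 + 1
        dg = (2 + u) ** 20 + 20 * u * (2 + u) ** 19
        u -= g / dg

    lam = 2 + u
    P = 1 - lam * (lam - 1) / (21 * lam - 40) * exp(N * log1p(u / 2))
    print(u, P)          # u ~ -9.5368341e-07, P ~ 0.379253961...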

Congratulations. Looie496 (talk) 03:56, 19 June 2011 (UTC)[reply]
Count Iblis, I am very impressed! How come the correct result is close to one minus the approximate result obtained by ignoring the dependence? Is it generally so, or is it only for 20 in a row and 1000000 flips? Bo Jacoby (talk) 11:13, 20 June 2011 (UTC).[reply]
That's indeed a curious thing worth looking into. 20 is a large number here in the sense that you have an equation z^21 - 2 z^20 + 1 = 0 with a root close to 2, and then (lambda/2)^N can be written as an exponential. The factor lambda(lambda-1)/(21 lambda - 40) is approximately 1. Then, if we just find the root approximately, we find that u is approximately minus 2^(-20), and we get the estimate Exp[-2^(-21) N] for the probability of there being no rows of 20 or more heads, and this is already quite accurate. It seems to me that it should be possible to fix the handwaving arguments given above... Count Iblis (talk) 16:22, 21 June 2011 (UTC)[reply]
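In Python, that heuristic is one line:

    from math import exp

    # u ~ -2^-20 gives (lambda/2)^N ~ exp(-N * 2^-21), the estimated probability of
    # seeing no row of 20 heads in N = 10^6 tosses.
    print(exp(-10**6 / 2**21), 1 - exp(-10**6 / 2**21))   # ~0.6207 and ~0.3793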
I just found the correct way to derive this approximation using the "handwaving" approach attempted by others above. I'll post that in a new thread, because this one is being archived, so it may not be noticed by everyone interested. Count Iblis (talk) 18:42, 21 June 2011 (UTC)[reply]

90!


Greetings, learned amigos (: I took a recreational maths test once that had a question: what are the last two nonzero digits of 90!, in order from the left? (That is, if 90! were equal to 121, they would be 21.) I did not get to this question, but looking back, how would I have solved it? WA gives a solution (12) but even simple calculators were not allowed, let alone Alpha. I suppose I could have written out 90!'s prime factorization, crossed out 2 and 5 in pairs, and then multiplied all the remaining numbers mod 10, but is there a faster and less tedious way to do this? (Remembering that all I had at my disposal were literally a pen and paper.) Cheers. 72.128.95.0 (talk) 03:36, 15 June 2011 (UTC)[reply]

De Polignac's formula may or may not be helpful-Shahab (talk) 05:53, 15 June 2011 (UTC)[reply]
Since we are only interested in the two least-significant digits that are non-zero, we can reduce the calculations to products of two-digit numbers. Implement the following algorithm:
      t = 1
      FOR i = 2 TO 90
         t = t * i
         IF (t MOD 10) = 0 THEN t = t / 10
         DO WHILE (t MOD 10) = 0 : t = t / 10 : LOOP
         t = t MOD 100
      NEXT
which though tedious is doable by hand and gives the answer 52 (not 12). Is this right? -- SGBailey (talk) 11:37, 15 June 2011 (UTC)[reply]
That algorithm fails at i=25, 50, 75. You need to repeat t=t/10 while (t mod 10) = 0 still holds. Algebraist 12:25, 15 June 2011 (UTC)[reply]
Algorithm repaired a la Algebraist and this gives 12. -- SGBailey (talk) 13:01, 15 June 2011 (UTC)[reply]
From an IPython shell using scipy I get
In [1]: factorial(90, exact=1)
Out[1]: 14857159644817614973095227336208257378855699612846887669422168637049
85393094065876545992131370884059645617234469978112000000000000000000000L
So, it is 12. --Slaunger (talk) 12:01, 15 June 2011 (UTC)[reply]
I gather you wanted a way to do it by hand without a program. In that case, I would have noticed that there are a lot more powers of 2 than 5 in 90! and hence even after removing all the trailing zeros, the number will still be divisible by 4. Hence, we only need to work out the result mod 25. I would then write out every number with those divisible by 5 having their factors of 5 removed and then write this list out again taken mod 25. Then I would cancel out those which are multiplicative inverses (so 2 and 13 would cancel each other out, 3 and 17, etc (This might take a while to work out but can be done) and finally from what is left it should be easy enough to calculate the product by hand mod 25. Then multiply by 4 and you're done. --AMorris (talk)(contribs) 14:24, 15 June 2011 (UTC)[reply]
Don't multiply by 4 at the end. Just see which of the 25n + k is divisible by 4, where k is the result modulo 25. And by the way, you can remove 20th powers when calculating modulo 25 (a^20 ≡ 1 mod 25 for a coprime to 5). Dmcq (talk) 00:11, 16 June 2011 (UTC)[reply]
Just had another think about this, and if I was doing it by hand I wouldn't split into prime factors as suggested above. Instead I'd work base 25 as mentioned before and use Gauss's generalization of Wilson's theorem: the product of the numbers below 25 prime to 25 is -1 modulo 25. Applied three times, up to 75, this can remove a whole pile of numbers quickly; then just remove powers of 5 and a corresponding number of powers of 2 from the remainder and multiply what remains modulo 25. That might be an appropriate time to split into factors. Dmcq (talk) 08:52, 16 June 2011 (UTC)[reply]
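A machine check of the mod-25 route sketched above (a sketch of the idea, not the by-hand method): strip the 2s and 5s from 90!, reduce modulo 25, restore the surplus 2s, and use divisibility by 4 to pick out the right residue modulo 100.

    def last_two_nonzero(n=90):
        t, twos, fives = 1, 0, 0
        for i in range(2, n + 1):
            while i % 2 == 0:
                i //= 2
                twos += 1
            while i % 5 == 0:
                i //= 5
                fives += 1
            t = t * i % 25
        t = t * pow(2, twos - fives, 25) % 25    # 2^(twos-fives) times the odd part, mod 25
        # n!/10^fives is divisible by 4, so exactly one of t, t+25, t+50, t+75 works
        return next(c for c in range(t, 100, 25) if c % 4 == 0)

    print(last_two_nonzero())   # 12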

Riemann integral of a circle


Is there a way to finish this sum of rectangles? For a semicircle of radius r cut into n slices, the area of the kth slice is:

\frac{2r}{n}\sqrt{r^{2} - \left(\frac{2kr}{n} - r\right)^{2}}

The sum of all of them is:

\sum_{k=1}^{n}\frac{2r}{n}\sqrt{r^{2} - \left(\frac{2kr}{n} - r\right)^{2}}

And extending to infinity:

\lim_{n\to\infty}\sum_{k=1}^{n}\frac{2r}{n}\sqrt{r^{2} - \left(\frac{2kr}{n} - r\right)^{2}}

Somehow this has to equal \frac{\pi r^{2}}{2}, which means \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\sqrt{1 - \left(\frac{2k}{n} - 1\right)^{2}} = \frac{\pi}{4}. Is it possible to show that this is true? This isn't for homework, just curiosity. I've been able to do this with a parabola, cone, sphere, just about everything but a circle, so it seems weird to me that it's so difficult. KyuubiSeal (talk) 03:53, 15 June 2011 (UTC)[reply]
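A quick numerical check (not a proof) that the rectangles do converge to the area of the semicircle; a rough Python sketch using right-edge heights:

    from math import sqrt, pi

    # n rectangles of width 2r/n under y = sqrt(r^2 - x^2), heights at x_k = -r + 2rk/n
    r = 1.0
    for n in (100, 10_000, 1_000_000):
        total = sum(2 * r / n * sqrt(r * r - (-r + 2 * r * k / n) ** 2)
                    for k in range(1, n + 1))
        print(n, total, pi * r * r / 2)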

I haven't taken a look at the math, but there's an easier way to show that the area of a circle is πr^2: http://en.wikipedia.org/wiki/Area_of_a_disk#Onion_proof. 65.92.5.252 (talk) 03:58, 15 June 2011 (UTC)[reply]
Yeah, plus there's the 'chop into sectors and rearrange into almost-parallelogram' method. Also, the integral of the circumference is area? That looks like it works for spheres too. Weird... — Preceding unsigned comment added by KyuubiSeal (talkcontribs) 04:09, 15 June 2011 (UTC)[reply]

[wrong answer removed] Ignore me, I confused your sum for an integral ... in my defence it was about midnight when I posted that. 72.128.95.0 (talk) 15:11, 15 June 2011 (UTC)[reply]

Well, if you divided a circle like an onion, each segment would have a width dr. The area between r and r + dr is approximately (circumference at that point) * (width) = 2πrdr. Then, you can just integrate. (To the experts: please don't smite me for ignoring rigor). 65.92.5.252 (talk) 04:16, 15 June 2011 (UTC)[reply]
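If software is allowed, the onion argument is literally a one-line integral; a small sketch using sympy (assuming it is installed):

    import sympy as sp

    # Summing the shells 2*pi*r*dr from 0 to R gives the area of the disk.
    r, R = sp.symbols('r R', positive=True)
    print(sp.integrate(2 * sp.pi * r, (r, 0, R)))   # pi*R**2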

Maybe this has something to do with the Basel problem? That does create a pi from a sum, but I don't know how I would apply it. Or if it's even relevant. — Preceding unsigned comment added by KyuubiSeal (talkcontribs) 17:12, 15 June 2011 (UTC)[reply]

This site has the same sum. I agree that it would be nice to know if you can show it sums to π/4 using another technique. To me the series looks quite elegant, and clearly isn't difficult to derive, so I'm surprised it doesn't seem to be on the long list of Pi formulas at mathworld. Perhaps it is somehow trivially equivalent to some other series? 81.98.38.48 (talk) 21:13, 15 June 2011 (UTC)[reply]

This is the Riemann sum for

\int_{-1}^{1}\sqrt{1 - x^{2}}\,dx

which is not an easy integral to evaluate. I know of the methods which involve integration by substitution of x = sin(t) or x = tanh(t) or x = sech(t), with use of hyperbolic or trigonometric identities. (Igny (talk) 02:20, 16 June 2011 (UTC))[reply]

It should be possible to compare the Riemann sum \frac{1}{n}\sum_{k=1}^{n-1}\frac{1}{\sqrt{1 - (k/n)^{2}}} to a polygonal approximation of the arc length of the part of the unit circle in the first quadrant, and so prove that in the limit this Riemann sum tends to the arc length. (I think this only involves elementary inequalities.) Then, using the geometric definition of π as half the circumference of the unit circle, this should then show

\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n-1}\frac{1}{\sqrt{1 - \left(\frac{k}{n}\right)^{2}}} = \frac{\pi}{2}

which can then be used to evaluate the aforementioned limit. I've not really tried to write out the details of this, though. Sławomir Biały (talk) 14:14, 16 June 2011 (UTC)[reply]
Polygonal approximation? Like a circumscribed polygon? I still don't know how to get that. KyuubiSeal (talk) 20:31, 17 June 2011 (UTC)[reply]

Highest number


Suppose there are N balls in a bin, each distinctly labeled with a number from 1 to N. I draw D balls from the bin, and note the highest numbered ball I get, which I call S. Is there a way to calculate the probability distribution of S? I wouldn't mind a general answer, but in the problem I'm dealing with, D<<N. 65.92.5.252 (talk) 17:07, 15 June 2011 (UTC)[reply]

There are \binom{S-1}{D-1} different ways that a highest number S can arise. So the distribution is

P(S) = \binom{S-1}{D-1} \Big/ \binom{N}{D}

--Sławomir Biały (talk) 18:49, 15 June 2011 (UTC)[reply]
Sorry, do you mind explaining how you got \binom{S-1}{D-1}? 65.92.5.252 (talk) 20:12, 15 June 2011 (UTC)[reply]

For a fixed S, the remaining D - 1 balls are drawn from the set {1,...,S-1}. Sławomir Biały (talk) 20:53, 15 June 2011 (UTC)[reply]
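A quick Python sketch checking the formula against a simulation, with hypothetical small values N = 50 and D = 5:

    from math import comb
    from random import sample

    N, D = 50, 5
    exact = {s: comb(s - 1, D - 1) / comb(N, D) for s in range(D, N + 1)}

    # Monte Carlo: draw D of the N labelled balls and record the maximum.
    trials = 100_000
    counts = {}
    for _ in range(trials):
        s = max(sample(range(1, N + 1), D))
        counts[s] = counts.get(s, 0) + 1

    print(exact[N], counts[N] / trials)   # P(S = 50) = D/N = 0.1; the simulation should be close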

Gotcha, thanks. 65.92.5.252 (talk) 09:39, 16 June 2011 (UTC)[reply]

If Visible Light Represented by People/Person, what's the Ratio


Just a thought, if the ELECTROMAGNETIC SPECTRUM was represented by population/people, how many people would see the Light? --i am the kwisatz haderach (talk) 17:41, 15 June 2011 (UTC)[reply]

I think it would just be (highest wavelength of visible - lowest of visible) / (highest of spectrum - lowest of spectrum), then multiply by Earth's population. KyuubiSeal (talk) 18:19, 15 June 2011 (UTC)[reply]

The Universe, in an episode on nebulas, says visible light is represented by only 1 inch on an electromagnetic scale of over 2000 miles. Going by this show, I would take 63,360 inches per mile x 2000 to get 126.72 million inches. With 1/126,720,000, and the world's population at 6.92 billion, I divide 6.92 billion by 126.72 million to get about 55 people seeing the light. --i am the kwisatz haderach (talk) 20:48, 15 June 2011 (UTC)[reply]
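Redoing that arithmetic in Python (using the 63,360 inches per mile and the 6.92 billion population figures from above):

    inches_per_mile = 63_360
    scale_inches = 2_000 * inches_per_mile        # 126,720,000 inches in 2000 miles
    world_pop = 6.92e9
    print(world_pop / scale_inches)               # ~54.6, i.e. about 55 people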